Cellular Nonlinear Networks

Cellular nonlinear networks are a type of neural network, typically implemented directly in hardware, used for image processing tasks. They were first proposed by the electrical engineer Leon Chua, the same man who first theorized the memristor. These networks can perform many different image processing tasks using just 19 numbers. The network has as many nodes as there are pixels in the input image, and, over time, each node evolves toward a certain value.

Each set of 19 numbers is hand-engineered to perform a different task. We begin by importing matplotlib (for viewing the images), numpy (for matrix operations), scipy.misc (for reading and resizing images), and scipy.signal (for the convolution part of the network).

In [1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.misc import imread, imresize
from scipy.signal import convolve2d

The network essentially uses the tanh function as the activation function for its nodes at each iteration, although other activation functions are possible. To see how each input is mapped to the output of this activation function, see the figure below:

In [2]:
r=np.random.randn(500)
fig=plt.figure()
ax1=fig.add_subplot(111)
ax1.scatter(r, np.tanh(r))
ax1.set_aspect('equal')
ax1.grid(True, which='both')
ax1.axhline(y=0, color='k')
ax1.axvline(x=0, color='k')
[Figure: scatter of tanh(r) versus r for random inputs r]

Chua calls the 19 numbers a gene, as he works from a biological paradigm. Just as in a cell, this string of numbers determines how the network behaves. There are many different genes that do different things; the gene below detects edges. The first element of the gene is the bias, Z. The 2nd through 10th numbers comprise the 3 x 3 input weight kernel, B. The remaining nine numbers are the 3 x 3 feedback (inhibitory) weight kernel, A, through which each node is influenced by its immediate neighbors.

In [3]:
gene=np.array([-0.5, -1, -1, -1, -1, 8, -1, -1, -1, -1, 0, 0, 0, 0, 2, 0, 0, 0, 0])
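
To make the gene's layout concrete, the snippet below unpacks it with the same slicing the class further down performs:

Z=gene[0]                      # bias term: -0.5
B=gene[1:10].reshape([3, 3])   # input kernel: a ring of -1s around an 8 (Laplacian-like)
A=gene[10:].reshape([3, 3])    # feedback kernel: a lone 2 at the center
print(Z)
print(B)
print(A)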

The network exists here as a Python class that takes three arguments: 1) an input image, 2) a gene, and 3) the number of iterations to run the network. The network dynamics are implemented in the method network. The first step is to scale the brightness values in the image to have a mean of 0 and a standard deviation of 1, and to set the initial state of the network, x0, to all zeros (its state before receiving input). We then take advantage of scipy's convolve2d function, which slides the input weight kernel B across every pixel and multiplies each region of the image by the kernel's values. Since the input never changes, this term, B0, only needs to be computed once, before the loop. Then, for the given number of iterations, the feedback kernel A slides across all of the pixels and computes the influence on each center pixel from its closest neighbors. Adding in the input term and the bias each iteration gives the change in each node's state, which is integrated with a small time step. The node states are passed through the activation function above before being fed back through A, and that's all there is to it.
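
In symbols, each pass through the loop is a forward-Euler update of the cellular-network state equation, where $x$ is the node state, $u$ is the normalized input image, and $*$ denotes 2-D convolution:

$$\dot{x} = -x + A * \tanh(x) + B * u + Z, \qquad x \leftarrow x + \dot{x}\,\Delta t,$$

with $\Delta t = 0.1$ in the code below.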

In [4]:
class ChuaNetwork(object):
  ''' A software implementation of a cellular neural network developed by Leon Chua and others.

      Args:
           gene:        a vector with 19 elements, starting with the Z value, 
                        then the B filter, then the A filter. 

           input_image: an RGB or grayscale image.    

           iterations:  number of desired iterations for the gene to operate
                        on the input_image. '''
              

  def __init__(self, input_image, gene, iterations):
    
    if len(input_image.shape)>2: 
      self.im=np.mean(input_image, axis=2)  # collapse RGB to grayscale
    else:
      self.im=input_image
    self.orig_im=input_image
    self.iters=iterations
    self.Z=gene[0]                     # bias
    self.B=gene[1:10].reshape([3, 3])  # input (feedforward) kernel
    self.A=gene[10:].reshape([3, 3])   # feedback kernel

  

  def network(self): 
    # Normalize the input to zero mean and unit standard deviation.
    self.im=(self.im-np.mean(self.im))/(np.std(self.im)+1e-6)
    x=0.0*self.im   # initial state x0: all zeros
    dt=0.1
    # The input never changes, so convolve it with B once, before the loop.
    B0=convolve2d(self.im, self.B, 'same')
    for iters in range(self.iters):
      # Forward-Euler step of dx/dt = -x + A*tanh(x) + B*u + Z
      dx=-x+convolve2d(np.tanh(x), self.A, 'same')+B0+self.Z
      x=x+dx*dt
    self.im=x
    
    fig=plt.figure()

    ax1=fig.add_subplot(121)
    ax1.imshow(self.orig_im)
    ax1.set_xlabel('Original Image')

    ax2=fig.add_subplot(122)
    ax2.imshow(self.im, cmap='gray')
    ax2.set_xlabel('Network Output')
    plt.show()

Here, we read in the image with scipy’s imread function. A picture of a road is used to illustrate the utility of these networks for computer vision tasks such as autonomous driving.

In [5]:
img=imread('road.jpg')

To run the network, the class first has to be instantiated as an object. Once that is done, we call the network method on the CNN object.

In [23]:
CNN=ChuaNetwork(imresize(img, [300, 300]), gene, 65)
CNN.network()
[Figure: original road image (left) and edge-detected network output (right)]

As you can see, there is far less information in the output image than in the original, but all of the important information is still there. This could allow for faster learning in certain problems (such as staying between the lanes while driving) if this network were placed at the front end of a learning algorithm. If this edge-detecting network were going to be used in training an autonomous car, we could even crop the image and use only the bottom half of the frame, since the sky doesn’t have anything to do with driving a car.
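
As a rough sketch of that crop (reusing the img and gene from above, and keeping only the lower 150 of the 300 resized rows):

bottom_half=imresize(img, [300, 300])[150:, :]   # discard the sky in the top half
CNN_cropped=ChuaNetwork(bottom_half, gene, 65)
CNN_cropped.network()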

[Figure: bottom-half crop of the road image and its network output]


For more information on cellular nonlinear networks, check out Chua’s HP lectures. They’re seriously fantastic.