Background on Magnetic Disks

Background on Neural Networks

High-Density Magnetic Disks and Neural Networks


Background on Magnetic Disks

A magnetic disk uses magnetic fields to store and retrieve information. (In contrast, CD-ROMs use light to retrieve information.) Bits are stored and retrieved through a read-write head, which floats above the surface of the spinning magnetic disk.

During data storage, the changing voltage in the read-write head makes a record of the data on the magnetic surface of the disk. During data retrieval, a changing voltage is induced in the read-write head by the magnetic fields on the spinning disk surface.

A voltage "spike" is generated for each change in the bits. For example, when reading the bits 00000111111, only one spike is generated (at the location where the 0s stop and the 1s begin). The direction of the spike indicates whether the change is from 0 to 1 or from 1 to 0.
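To make this concrete, here is a small sketch (in Python) of this transition-based encoding. The function name and the +1/-1 representation of the spikes are my own illustration, not part of the actual read hardware.

    def spikes_from_bits(bits):
        """One spike per bit transition: +1 for 0->1, -1 for 1->0, else 0."""
        spikes = [0]  # no transition before the first bit
        for prev, curr in zip(bits, bits[1:]):
            if curr > prev:
                spikes.append(+1)   # 0 -> 1
            elif curr < prev:
                spikes.append(-1)   # 1 -> 0
            else:
                spikes.append(0)    # no change, no spike
        return spikes

    bits = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
    print(spikes_from_bits(bits))
    # -> [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]: a single spike where the 0s stop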

The graphic below shows a measured induced voltage with three spikes, together with the corresponding bit pattern.
 

[Figure: Simple induced read voltage with three spikes, and its bit pattern]


Background on Neural Networks

Neural networks borrow ideas from human information processing. Information is represented by numbers and processed in parallel. The emphasis is on what is learned, not on how it is represented internally. (Quickly, how much is 3*4? Now ask your friend. Both of you came up with the same answer, and that's what counts. How your brain did it is not important. And chances are good that your friend's brain did it differently.)

The basic structure of a neural network is given in the picture below.

[Figure: Example neural network]

Each box is a processing element. The number on each box (and its size) indicates the value of that element. Each line is a connection used to pass numeric information from one processing element to another. The number on each line indicates the strength of the connection, which influences how much information is passed on. The processing elements on the far left are the input elements, the one on the right is the output element, and the ones in the middle are called "hidden" elements.
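As a small illustration of how the numbers flow from left to right, here is a sketch in Python. The weighted sums and the sigmoid "squashing" function are common choices; the particular sizes and weight values are made up.

    import math

    def sigmoid(x):
        """Squash any number into the range (0, 1)."""
        return 1.0 / (1.0 + math.exp(-x))

    def forward(inputs, w_hidden, w_out):
        """One left-to-right pass: inputs -> hidden elements -> output element."""
        hidden = [sigmoid(sum(w * x for w, x in zip(weights, inputs)))
                  for weights in w_hidden]
        return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

    # Tiny made-up network: 2 inputs, 2 hidden elements, 1 output element.
    w_hidden = [[0.5, -0.3],   # connection strengths into hidden element 1
                [0.8,  0.2]]   # connection strengths into hidden element 2
    w_out = [1.0, -0.7]        # connection strengths into the output element
    print(forward([0.9, 0.1], w_hidden, w_out))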

Let's assume we want a neural network which can process electrocardiogram (ECG) information and determine whether the patient had a mild heart attack. We need to collect many different ECGs. For each ECG we also need an expert to determine whether the patient had a heart attack or not. We take the collected information and present it to the network: with this ECG the answer is NO; with that ECG the answer is YES; with yonder ECG the answer is... What does the network do? It guesses the answers and adjusts its internal workings so that it gets more and more answers right. In other words, it learns from the examples presented. (This type of learning, where both input and output are presented, is called "supervised" learning.)
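Here is a tiny sketch of this guess-and-adjust loop with made-up numbers. Each "ECG" is reduced to three numbers, and the "network" is just one processing element with an error-driven update rule, which is a big simplification of real training methods.

    import random

    random.seed(0)
    # Made-up training data: (simplified ECG features, expert's verdict).
    training_data = [([0.2, 0.1, 0.3], 0),   # expert says: NO heart attack
                     ([0.9, 0.8, 0.7], 1),   # expert says: YES, heart attack
                     ([0.1, 0.2, 0.2], 0),
                     ([0.8, 0.9, 0.6], 1)]

    weights = [random.uniform(-0.5, 0.5) for _ in range(3)]

    for epoch in range(1000):
        for features, answer in training_data:
            guess = sum(w * x for w, x in zip(weights, features))
            error = answer - guess                # how wrong was the guess?
            weights = [w + 0.1 * error * x        # nudge each connection
                       for w, x in zip(weights, features)]

    for features, answer in training_data:
        guess = sum(w * x for w, x in zip(weights, features))
        print(answer, round(guess, 2))            # guesses approach the answers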
 

How does a neural network learn?

There are many different methods for a neural network to learn. Here I describe the basic principle behind many of them. Mathematically, the error the neural network makes can be described in terms of the strengths of the connections between the elements. The goal is to reduce the error as much as possible. Intuitively, you can think of the error as a mountain range, and the goal is to get back to the cozy, warm cottage in the valley. What do you do when you're stuck in the Rocky Mountains? You go downhill. And that's what the network does: it moves downhill in the "error landscape", thus reducing the overall error.

[Figure: Error landscape]

The example picture shows an error landscape for just one connection. If you are the red dot, going downhill will help you get to the valley. If you are the blue dot, however, then going downhill will only lead you into a ravine. Of course, clever as you are, you won't stay there, but climb up the other side to eventually end up in the valley. Neural network learning mechanisms must do the same. However, there is an inherent risk in such behavior: since it is not known where the smallest error is, the network might move away from it. In our little example, imagine the real valley to be to the left of the image. Moving to the right will only get you to the lowest spot in the picture, but not the lowest spot in the mountains.
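The downhill move is easy to sketch for a single connection w. The error curve below is invented so that it has a small ravine next to the real valley; note how plain downhill movement gets stuck in the ravine, which is exactly the risk described above.

    def error(w):
        """Invented error curve with two valleys; the left one is the real one."""
        return w**4 - 3 * w**2 + w + 3

    def slope(w, eps=1e-5):
        """Numerical estimate of the slope of the error curve at w."""
        return (error(w + eps) - error(w - eps)) / (2 * eps)

    def go_downhill(w, rate=0.01, steps=2000):
        for _ in range(steps):
            w -= rate * slope(w)     # always step against the slope
        return w

    print(round(go_downhill(-0.5), 2))  # ends in the real valley (w ~ -1.30)
    print(round(go_downhill(+1.5), 2))  # stuck in the ravine    (w ~ +1.13)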
 


High-Density Magnetic Disks and Neural Networks


When the bits are recorded closer and closer to each other on the magnetic disk, the individual read "spikes" start to overlap. Moreover, there is non-linear interference among the individual spikes. On top of this, there are distortions in the induced voltage due to the movements of the read-write head (its distance from the surface varies slightly, and it oscillates slightly from left to right). And some random background noise also makes the pattern harder to recognize.
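The overlap effect is easy to simulate. In the sketch below, each bit transition contributes one idealized pulse (I use a Lorentzian shape, a common idealization of an isolated read pulse); the widths, spacings, and noise level are made-up illustration values, and the pulses here only add linearly, while on a real disk the interference is also non-linear.

    import random

    random.seed(1)

    def pulse(t, center, width=2.0):
        """Idealized isolated read pulse (Lorentzian shape)."""
        return 1.0 / (1.0 + ((t - center) / width) ** 2)

    def read_signal(bits, bit_spacing, noise=0.05):
        """Superimpose one +/- pulse per bit transition, plus background noise."""
        signal = []
        for t in range(int(len(bits) * bit_spacing) + 1):
            v = 0.0
            for i in range(1, len(bits)):
                if bits[i] != bits[i - 1]:                 # transition -> spike
                    sign = +1 if bits[i] > bits[i - 1] else -1
                    v += sign * pulse(t, i * bit_spacing)
            signal.append(v + random.gauss(0, noise))
        return signal

    bits = [0, 1, 0, 1, 1, 0]
    print([round(v, 2) for v in read_signal(bits, bit_spacing=8)])  # spikes separate
    print([round(v, 2) for v in read_signal(bits, bit_spacing=2)])  # spikes overlap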

Below is a picture of a read signal. The bits are recorded at high density and are not that easy to detect. The left part of the signal is still easy to decode. Try to determine the sequence of bits for the given signal; there are six bits between the dashed lines.
 
[Figure: High-density induced read voltage and its bit pattern]

Since recognizing the individual bits becomes hard, a neural network was trained to detect each bit correctly. The input to the network is a window of measured voltages (for example, 15 values); the output indicates whether the bit in the middle (corresponding to input value 8) is 0 or 1. Several neural network architectures have been tried and show promise.
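Here is a sketch of that input/output arrangement. The 15-value window follows the example in the text; the assumption of one voltage sample per bit position, and the made-up data at the end, are mine.

    WINDOW = 15
    HALF = WINDOW // 2   # 7 samples on each side of the middle bit

    def make_training_pairs(voltages, bits):
        """Pair each 15-value voltage window with the bit at its center
        (the 8th value, index 7), assuming one sample per bit position."""
        pairs = []
        for i in range(HALF, len(bits) - HALF):
            window = voltages[i - HALF : i + HALF + 1]
            pairs.append((window, bits[i]))
        return pairs

    # Example with made-up numbers; a real run would use measured voltages.
    voltages = [0.1 * (i % 7) for i in range(40)]
    bits = [(i // 3) % 2 for i in range(40)]
    pairs = make_training_pairs(voltages, bits)
    print(len(pairs), pairs[0])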


For this research I use the neural network simulator SNNS, which was developed at the University of Stuttgart in Germany. It's available for free, including source code. I run it under Linux, which I installed on my computer.