Artificial Intelligence on a C64

I recently read an article about how scientists at the University of California, Berkeley, were able to use a computer to visualize what a person was thinking about.
Basically, a person was shown a series of images while a powerful computer monitored the areas of the brain that became active. The computer would “learn”, associating the patterns in the brain with what the person was watching.
Then the person was shown a series of new images, and the computer had to figure out what each image was. Drawing on a large database of YouTube videos, it attempted to reconstruct an image of what the person was seeing. The results were exciting and spooky at the same time, as the computer could actually visualize (although blurrily) what the person was seeing.

This sort of pattern analysis or “associative memory”, so called because it recalls items based on similarity, makes the computer “think” in patterns rather than just look for an exact match on a variable. Basically, it is a form of AI: a computer thinking like a human being.

The concept of a computer thinking the same way as a human being revolves around neural networks.
Our brains don’t simply store exact representations of reality (i.e. sounds, images, smells, …); they store patterns. These patterns live in the portion of the brain responsible for thought and memory, which consists primarily of nerve cells, or neurons.
Each neuron has three parts: dendrites, a cell body, and an axon. The dendrites connect to the axons of other neurons and when these other neurons are stimulated, the dendrites convey the signal to the cell body via a synapse or connection, which either excites or inhibits the neuron (with a different strength for each synapse). When the excitation sufficiently outweighs inhibition, the neuron sends a signal down its axon which in turn excites or inhibits other neurons, and perhaps causes a muscle to move.

Because neurons primarily connect to other neurons, they form networks of great complexity.
Let’s take for instance two groups of five neurons each, in which each neuron connects to every neuron in the other group. In this simple case we have 10 neurons with 5 connections (or synapses) each, for a total of 50 synapses.
Researchers believe that the brain contains about 100 billion neurons, each of which connects to anywhere from 1,000 to 100,000 other neurons, forming at least 100 trillion connections, and probably far more.
Compare this to the computational power of today’s most powerful computers, and, assuming a transistor is roughly comparable to a neuron, you’ll realize we’re only just approaching the “same” computing power as our brains (the most powerful computer, if memory serves me well – no pun intended – is China’s Tianhe-1A (Milky Way) supercomputer, capable of 2,500 trillion calculations per second).

Now, a simple computer like the Commodore 64 surely isn’t capable of simulating the brain, right? It has only a handful of transistors compared to the supercomputers of the early 21st century, so such a comparison or experiment is totally out of the question, right?

Wrong and wrong!

When I was at university, one of our professors, who was literally obsessed with AI, chaos theory and black holes, lectured on the subject of neural networks. One of the algorithms we studied was the backpropagation algorithm. I remember most of the other students putting it to work on the 386/486 machines of the day, but I wanted to do this on my Commodore 64. I managed to create a program that used the algorithm to simulate a neural network on an old machine from the ’80s, and I was quite surprised that, although the processing was slow, it produced the expected results!
I’ve searched high and low on all my old disks, but sadly have not been able to retrieve it. If I do find it, I’ll put it up on the site.
Luckily, though, I apparently wasn’t the only one keen on getting a neural network running on the old hardware.

John Walker of Fourmilab has done something quite similar: he created a program for the C64, written entirely in BASIC (!!), that simulates the basic workings of the brain using a neural network topology, in the mere 64K of your breadbox (cue “The Twilight Zone” music).
This simple program can simulate the behavior of a network of interconnected neurons, which can see patterns and remember them.
When a similar pattern is then fed into the neural network, it will find the pattern it’s learned that is most like the pattern it’s shown.
Anxious to test this out on your own C64? Well, then first get a copy of the program here and fire it up.

Once it’s up and running, it displays two windows with a menu underneath explaining the basic functions of the program.
When you type a letter or number, the dot pattern for that symbol appears in the left window. Try typing a few letters and numbers to see how this works. Now we’re ready to train the neural network by having it learn a pattern.

Type a letter and press F1. The program trains its simulated neurons to memorize the pattern (this takes about 30 seconds) and READY reappears on the screen when the pattern has been learned.
Teach the program three different-looking letters, say “A”, “T”, and “Z”.
Now let’s try recalling a pattern. Press the “A” key to place an “A” in the left window. Now press F3—this introduces errors in the pattern by randomly changing about 10% of the dots in the pattern each time you press it. After you press F3 a couple of times, you’ll have a pattern that looks something like an “A”, but doesn’t exactly match what we taught the program. Press F5 to start the recall process. The pattern is run back and forth through the neuron network until it stabilizes on a fixed pattern (an arrow in the middle of the screen shows the direction of the transfer). After the neuron network has “thought” about the problem for a few cycles, you’ll probably get back the original “A” we taught the program.
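The F3 step, flipping roughly 10% of the dots at random, is easy to picture in modern terms. Here is a short illustration in Python (not part of Walker's program; `randomize` and its parameter names are hypothetical):

```python
import random

def randomize(pattern, p=0.1):
    # flip each +1/-1 dot with probability p, like the F3 key does
    return [-d if random.random() < p else d for d in pattern]
```

Pressing F3 repeatedly corresponds to calling this several times, compounding the noise each time.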

But just like the brain, this process doesn’t always remember the right thing: if the random changes made the pattern look more like another pattern the program has learned, that one will be found instead.
Try this by entering “I” and recalling with F5. The network recognizes it as “T”, because “I” looks more like “T” than like “A” or “Z”, the other patterns it has learned. This is what the human brain does when it sees a pattern and immediately associates it with a pattern it has seen or learned before. Many researchers think the basic process the brain uses is much the same as the one used by this program.

Pretty cool for a 25-year-old machine!

Bonus: For the techies who want to know how the neural network is coded, here’s the code review (you can find the listing here).

The program simulates two fields of neurons with the arrays F1% and F2%, and displays these fields in the two windows on the screen. When you type a letter or number, the dot pattern for that symbol is read from the character ROM and stored in F1%. Lighted dots are stored as 1 and background dots as −1. The character patterns are 6 dots wide by 7 dots high, so F1% and F2% both hold 42 numbers.
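To make that encoding concrete, here is a small Python sketch (the function name is mine, not from the listing) of how seven row bytes read from the character ROM unpack into the 42-element ±1 pattern, mirroring the bit-peeling loop in the BASIC:

```python
def rows_to_pattern(rows):
    # rows: 7 bytes from the character ROM, one per dot row
    pattern = []
    for byte in rows:
        j = byte // 2          # the BASIC halves the byte first,
        for _ in range(6):     # then peels off six bits, low to high
            pattern.append(-1 + 2 * (j & 1))  # +1 = lit dot, -1 = background
            j //= 2
    return pattern
```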
Each neuron in a field potentially connects to every neuron in the other field. Each connection, which is equivalent to a synapse in the brain, has its own weight: positive to excite, negative to inhibit, and zero if there is no connection. These weights are stored in the 42×42 matrix M%, for a total of 1,764 connections.

To learn a pattern, we form a matrix from the pattern in F1% and add it to the weight matrix M% (see lines 1020–1060 in the program). To recall a pattern, we take the pattern in F1% and multiply it by the weight matrix (lines 1410–1480). If the value is 1 or more, we place a 1 in that position of F2%; if it’s negative, we store a −1 there. If the value is zero, we leave the value in F2% alone. Then we take the value in F2% and multiply it back through the matrix, but with rows and columns swapped, and store the result in F1% (lines 1540–1610). We keep this up until the pattern in F1% stops changing. That’s our final value, the pattern we’ve recalled from the network.
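For readers who would rather see the algorithm than the BASIC, here is a minimal Python sketch of the same scheme: Hebbian outer-product learning plus the back-and-forth recall passes. The function names and the plain-list representation are mine; the logic follows the line ranges cited above.

```python
N = 42  # 6x7 dots per pattern

def train(m, pattern):
    # Hebbian learning: add the pattern's outer product with itself
    # to the weight matrix (lines 1020-1060 of the listing)
    for i in range(N):
        for j in range(N):
            m[i][j] += pattern[i] * pattern[j]

def recall(m, pattern):
    # run the pattern back and forth between the two fields until it
    # stabilizes (lines 1410-1610 of the listing)
    f1 = list(pattern)
    f2 = list(pattern)
    changed = True
    while changed:
        # F1 -> F2 pass: a zero sum leaves the old value in place
        for j in range(N):
            v = sum(f1[i] * m[i][j] for i in range(N))
            if v != 0:
                f2[j] = 1 if v > 0 else -1
        # F2 -> F1 pass with rows and columns swapped; note changes
        changed = False
        for i in range(N):
            v = sum(f2[j] * m[i][j] for j in range(N))
            s = 1 if v > 0 else -1
            if v != 0 and s != f1[i]:
                f1[i] = s
                changed = True
    return f1
```

Training two patterns and then recalling a slightly corrupted copy of one of them brings back the clean original, exactly as the F3/F5 experiment above demonstrates on the C64.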


5 Responses to Artificial Intelligence on a C64

  1. Pingback: Tech Rewind: The Commodore 64 | The Geek Post

  2. Pingback: Crazy numbers on the C64 | A Commodore Geek's Blog

  3. Pingback: Origins of a Commodore geek | MOS 6502

  4. Unfortunately, the program display scrolls up as it reports status, making for a sloppy display. I have made a few changes to the listing to rectify this issue. Nothing too fancy, just enough to get it functioning properly. First I added a simple for–next loop to clear the screen and scroll to the bottom (lines 91–93). But primarily I’ve added a cursor-positioning routine prior to printing the program’s status, in lines 980–984. It appears this portion of the program was not complete: a routine was originally added here by the author but didn’t actually do anything (basically a program stub). Here’s the modified listing:

    10 rem screen configuration
    20 poke 53280,13
    30 poke 53281,6
    40 print "";
    50 open 15,8,15
    60 rem variable declarations
    70 dim f1%(42),f2%(42),m%(42,42)
    80 dim v%,j,i
    90 rem initialise screen
    91 for x = 1 to 23
    92 print
    93 next x
    100 print "";
    110 print " neuron network associative memory"
    120 print
    130 print "";
    140 print "f1 - teach pattern ";
    150 print "f2 - dump matrix"
    160 print "f3 - randomize pattern ";
    170 print "f4 - forget all"
    180 print "f5 - recall pattern ";
    190 print "f6 - quit"
    200 print "f7 - disc save ";
    210 print "f8 - disc load"
    220 print
    230 print "a-z, 0-9: load pattern"
    240 r1 = 4 : c1 = 5 : gosub 600
    250 r1 = 4 : c1 = 25 : gosub 600
    260 gosub 750
    270 gosub 860
    280 gosub 970:print "ready "
    290 get a$ : if a$="" goto 290
    300 gosub 970:print " "
    310 k=asc(a$)
    320 if a$>="0" and a$<="9" then k=k+64:goto 340
    330 if a$<"a" or a$>"z" then 500
    340 gosub 970:print "fetch ";a$
    350 l%=0
    360 k=(k-64)*8+53248
    370 poke56333,127:poke 1,peek(1)and251
    380 fori=0to6:poke49408+i,peek(k+i):next
    390 poke 1,peek(1) or 4:poke 56333,129
    400 for i = 0 to 6
    410 j% = peek(49408+i)/2
    420 for k=1 to 6
    430 l%=l%+1
    440 f1%(l%) = -1 + (2 * (j% and 1))
    450 j%=j%/2
    460 next k
    470 next i
    480 gosub 750 : gosub 860 : goto 280
    490 rem dispatch function key commands
    500 j%=asc(a$)-132
    510 if j%=1 then gosub 1000:goto 280
    520 if j%=5 then gosub 1080:goto 90
    530 if j%=2 then gosub 1210:goto 280
    540 if j%=6 then gosub 1680:goto 280
    550 if j%=3 then gosub 1290:goto 280
    560 if j%=7 then print "";:close15:end
    570 if j%=4 then gosub 1800:goto 90
    580 if j%=8 then gosub 1990:goto 90
    590 go to 280
    600 rem draw borders for fields
    610 for i=0 to 1
    620 v=1024+40*(r1+(i*8))+c1
    630 poke v,112+(-3*i)
    640 for j=1 to 8
    650 poke v+j,67
    660 next j
    670 poke v+9,110+(15*i)
    680 next i
    690 for i=1 to 7
    700 v=1024+40*(r1+i)+c1
    710 poke v,93
    720 poke v+9,93
    730 next i
    740 return
    750 rem update field f1% on screen
    760 l%=0
    770 for i=0 to 6
    780 v% = 1024+40*(i+5)+6
    790 for j=2 to 7
    800 l%=l%+1
    810 if f1%(l%)=1 then poke v%+(8-j),81:goto 830
    820 poke v%+(8-j),32
    830 next j
    840 next i
    850 return
    860 rem update field f2% on screen
    870 l%=0
    880 for i = 0 to 6
    890 v%=1024+40*(i+5)+26
    900 for j=2 to 7
    910 l%=l%+1
    920 if f2%(l%)=1 then poke v%+(8-j),81:goto 940
    930 poke v%+(8-j),32
    940 next j
    950 next i
    960 return
    970 rem position to status area
    980 poke 781,23
    981 poke 782,28
    982 poke 783,0
    984 sys 65520
    990 return
    1000 rem train on pattern in f1%
    1010 gosub 970:print "training"
    1020 for i = 1 to 42
    1030 for j = 1 to 42
    1040 m%(i,j)=m%(i,j)+f1%(i)*f1%(j)
    1050 next j
    1060 next i
    1070 return
    1080 rem print part of matrix
    1090 print "";
    1100 for i=1 to 24
    1110 for j=1 to 39
    1120 ifm%(i,j)
    1210 rem randomize pattern in f1%
    1220 for i=1 to 42
    1240 if rnd(1)>0.1 then 1260
    1250 f1%(i)=-f1%(i)
    1260 next i
    1270 gosub 750
    1280 return
    1290 rem recall from pattern
    1300 gosub 970:print "recall"
    1310 p%=1024+40*9+19
    1320 rem initially copy f1 to f2
    1330 poke p%+1,asc("=")
    1340 for i=1 to 42
    1350 f2%(i)=f1%(i)
    1360 next i
    1370 gosub 860
    1380 rem f1 to f2 pass
    1390 poke p%,asc("=")
    1400 poke p%+2,asc(">")
    1410 for j=1 to 42
    1420 v%=0
    1430 for i=1 to 42
    1440 v%=v%+f1%(i)*m%(i,j)
    1450 next i
    1460 v%=sgn(v%)
    1470 if v%<>0 then f2%(j)=v%
    1480 next j
    1490 gosub 860
    1500 rem f2 to f1 pass
    1510 c%=0
    1520 poke p%,asc("<")
    1530 poke p%+2,asc("=")
    1540 for i=1 to 42
    1550 v%=0
    1560 for j=1 to 42
    1570 v%=v%+f2%(j)*m%(i,j)
    1580 next j
    1590 v%=sgn(v%)
    1600 if v%<>0 and v%<>f1%(i) then f1%(i)=v%:c%=1
    1610 next i
    1620 gosub 750
    1630 if c%<>0 goto 1380
    1640 poke p%,asc(" ")
    1650 poke p%+1,asc(" ")
    1660 poke p%+2,asc(" ")
    1670 return
    1680 rem forget all - clear memory
    1690 gosub 970:print "forget"
    1700 for i=1 to 42
    1710 f1%(i)=0
    1720 f2%(i)=0
    1730 for j=1 to 42
    1740 m%(i,j)=0
    1750 next j
    1760 next i
    1770 gosub 750
    1780 gosub 860
    1790 return
    1800 rem save state to disc file
    1810 gosub 970:print "save"
    1820 print "";
    1830 input "file name: ";a$
    1840 a$="@0:"+a$+",s,w"
    1850 open 5,8,5,a$
    1860 for i=1 to 42:print#5,f1%(i):next
    1870 gosub 2240
    1880 for i=1 to 42:print#5,f2%(i):next
    1890 gosub 2240
    1900 for i=1 to 42
    1910 for j=1 to 42
    1920 print#5,m%(i,j)
    1930 next j
    1940 gosub 2240
    1950 next i
    1960 close 5
    1970 print "";
    1980 return
    1990 rem restore state from disc file
    2000 gosub 970:print "restore"
    2010 print "";
    2020 input "file name: ";a$
    2030 a$="@0:"+a$+",s,r"
    2040 p%=asc("m")
    2050 gosub 2240
    2060 open 5,8,5,a$
    2070 for i=1 to 42
    2080 input#5,f1%(i)
    2090 next i
    2100 gosub 2240
    2110 for i=1 to 42
    2120 input#5,f2%(i)
    2130 next i
    2140 gosub 2240
    2150 for i=1 to 42
    2160 for j=1 to 42
    2170 input#5,m%(i,j)
    2180 next j
    2190 gosub 2240
    2200 next i
    2210 close 5
    2220 return
    2230 rem disc error check
    2240 input#15,en,em$,et,es
    2250 if en>0 then print en,em$,et,es:stop
    2260 return

  5. Scott Kemp, thanks for modifying the program.

    The code in your comment did not work “out of the box” because some spaces were missing and quotation marks were (surely, automatically) converted to typographic symbols. But I was able to insert the new parts from your version into the original one, and it worked.
