And now for a more direct approach...
By Terry Newton
BEAM
is nice and simple, but after making a couple of walkers
and a Solarbotics "photopopper"
I'm left with the feeling that much more will be needed to make these devices
useful in the real world. True, I don't have to clear off as much stuff
from the coffee table for the walkers, however I also have to watch them
carefully lest they fall off and break, forcing me to keep them either
pinned up in a box or powered down. The photopopper is a more survivable
design, but is dumb and slow and doesn't do enough to keep me interested.
Don't get me wrong, walkers and photopoppers are cool things and you will
definitely learn about efficient power management by building them (plus
they're cheap!), but they don't do the kinds of things I have come to associate
with advanced autonomous robotics. Simple walkers don't steer very well,
making autonomous control hit-or-miss at best, and photopoppers have no
brains and move too slowly in my dim basement light. The most "intelligent"
robot I have is a hacked OWI kit running a variation
of the old Heiserman algorithm. Too bad it is sidelined with stripped gears.
What I need to do now is take what I've learned from BEAM and make a tiny,
robust photopopper-style robot that has brains and memory, and is preferably
lightweight enough to fly should it take a leap.
This new robot is designed to capture the survival space of the photopopper, but be reversible and smart enough to interpret its senses, look up memories, and modify those memories as needed. It also maintains a much larger reserve of power, so that in addition to moving an inch at a time at a rate determined by light level, it can move at much higher rates, even continuously, then make up for the loss by sleeping. This way it can still do its thing autonomously, but also put on a better show and provide enough movement to perform meaningful experiments without having to be in direct sunlight. It can sit still all day for all I care, as long as it works when I or something else in its environment interacts with it.
Basic Specs and Parts... (all subject to change, and they do)
Weight - 1.2 oz approx
Power - 3733 solar cell, 1N5817 diode, 1F AL gold super-cap by 2 (series),
470uF and 0.1uF caps
Locomotion - pair of 1701 Pager motors from Solarbotics
Motor drivers - Zetex H-bridge (digikey ZHB6718CT) by 2, 1K by 8, 2.2K
by 4, 1N914 by 4
Brains - a PIC16C56 18-pin microcontroller, 10K/27pF clock
Long-term memory - 24LC65 8-pin eeprom chip, 47K by 2 pullups
Senses - 1381-L 3-pin voltage monitor, two CdS photocells, two "spring"
feelers
Interface - 0.1uF by 2, 0.22uF by 2, 3.0M by 2, 100 ohms by 2
Reset fix - 10M, 1N914 by 2, 2.2uF
Debugging - jumbo red led and 470 ohm resistor, also the front skid
Rough schematic...
[The hand-drawn ASCII schematic was mangled in transcription. In brief: the 3733 solar cell charges the two series 1F super-caps through the 1N5817 schottky diode, with 470uF and 0.1uF caps across the rails. The 1381-L voltage monitor feeds PIC pin Ra2, with the 10M/2.2uF/1N914 reset-fix network on the PIC's reset pin. The 24LC65 eeprom hangs off Ra0/Ra1 with 47K pullups. Each CdS photocell and its 0.22uF cap form an RC light sensor on Rb0/Rb7, with 3.0M bleed resistors, and the feelers ground Rb1/Rb6 through 100 ohm resistors. Rb2-Rb5 drive the two ZHB6718 H-bridges (1K/2.2K/1N914 networks) to the 1701 pager motors, and the debug LED with its 470 ohm resistor hangs off Ra3.]
The PIC chip must be programmed with robot code specific to this hardware.
I use the 16C56/JW windowed chip, a U.V. lamp for erasing it (watch your
eyes) and the Parallax programming
hardware and software. Once the software has been worked out, a permanent
version can be programmed onto a 16C56-RC/P one-time part. The 16C56 processor
contains 1K words of program store (rom) and about 25 bytes of usable ram.
If that isn't enough, the 16C58 version is code and pin compatible and
has 2K rom and about 65 bytes of ram. Other PIC families are electrically
re-writable and have more on-board ram, but I haven't tried them.
The design includes a 24LC65 electrically erasable programmable rom (eeprom) to provide up to 8K bytes of long-term memory for variables, data tables, etc. Software routines are used to access the memory. If long-term memory isn't needed, the eeprom can be omitted, or just left out of the socket (even left in, it only draws about 5uA).
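Those access routines are just bit-banged I2C on Ra0 (clock) and Ra1 (data). The real code is PIC assembly, but here's a minimal C sketch of the idea; the SCL_*/SDA_* macros are hypothetical stand-ins for the port twiddling:

/* Bit-banged I2C write to the 24LC65, illustrative only.  SDA_HIGH,
   SDA_LOW, SCL_HIGH, SCL_LOW and SDA_READ are assumed macros that
   manipulate Ra1 (data) and Ra0 (clock); the 47K pullups let a pin
   float high when released. */
static void i2c_start(void) { SDA_HIGH(); SCL_HIGH(); SDA_LOW(); SCL_LOW(); }
static void i2c_stop(void)  { SDA_LOW();  SCL_HIGH(); SDA_HIGH(); }

static int i2c_write_byte(unsigned char b)
{
    int i, ack;
    for (i = 0; i < 8; i++) {            /* shift out MSB first */
        if (b & 0x80) SDA_HIGH(); else SDA_LOW();
        SCL_HIGH(); SCL_LOW();
        b <<= 1;
    }
    SDA_HIGH();                          /* release data for the ACK bit */
    SCL_HIGH();
    ack = (SDA_READ() == 0);             /* slave pulls data low to ACK */
    SCL_LOW();
    return ack;
}

/* Write one byte into the eeprom (24LC65 control byte 0xA0, two
   address bytes).  The part then needs ~10ms to burn the byte in. */
static void eeprom_write(unsigned int addr, unsigned char data)
{
    i2c_start();
    i2c_write_byte(0xA0);                /* device select, write */
    i2c_write_byte(addr >> 8);           /* address high byte */
    i2c_write_byte(addr & 0xFF);         /* address low byte */
    i2c_write_byte(data);
    i2c_stop();                          /* starts the internal write cycle */
}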
Currently there are two versions of the control code: a simple photovore simulation that does not use the eeprom, and the beginnings of a reinforcement learning version. Consider all code here experimental, provided as examples.
Equivalent motor driver circuit...
[ASCII schematic mangled in transcription. In brief: each ZHB6718 forms an H-bridge with two PNP transistors on the high side (pin 6 to +, the motor connected between pins 5 and 7) and two NPN transistors on the low side (pin 2 to ground). The two PIC drive lines each go through a 1K resistor and a 1N914 diode to an NPN base (pins 3 and 1), 1K resistors cross-couple each low side to the opposite PNP base, and 2.2K resistors hold the NPN bases off.]
There is a 1-1 "smoke" condition with this circuit (both inputs of one
bridge high turns on all four transistors), but that is easily programmed
out, and even if it happens the capacitor will most likely discharge before
any damage is done. The numbers refer to the pins of the Zetex part.
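Programming it out amounts to a sanity mask on the action bits before they ever reach the port. A sketch in C, assuming the Rb2-Rb5 bit layout from the port assignments below:

/* Refuse any action that turns on both halves of one bridge.
   Bits assumed from the port assignments: Rb2/Rb3 = right motor
   forward/reverse, Rb4/Rb5 = left motor forward/reverse. */
unsigned char safe_action(unsigned char action)
{
    if ((action & 0x0C) == 0x0C) action &= ~0x0C; /* right bridge 1-1 */
    if ((action & 0x30) == 0x30) action &= ~0x30; /* left bridge 1-1 */
    return action;
}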
Port assignments...
Pin 17 - Ra0 - 24LC65 clock
Pin 18 - Ra1 - 24LC65 data
Pin 1  - Ra2 - from 1381 output
Pin 2  - Ra3 - debug LED out
Pin 6  - Rb0 - Left photocell (analog in, time 0-1)
Pin 7  - Rb1 - Left feeler (0 when touching)
Pin 8  - Rb2 - Right motor forward (grounds blue when 1)
Pin 9  - Rb3 - Right motor reverse (grounds red when 1)
Pin 10 - Rb4 - Left motor forward (grounds red when 1)
Pin 11 - Rb5 - Left motor reverse (grounds blue when 1)
Pin 12 - Rb6 - Right feeler
Pin 13 - Rb7 - Right photocell
Feeler inputs are active low; the photocells are read by discharging a capacitor (by making the pin low) and measuring how long it takes for the photocell to charge the cap back up and make the pin go high. The photocells should be matched using an ohmmeter and a Sharpie pen so that they read equal resistance at a given light level. The cells on my prototype are trimmed to measure about 3K in room light. In sunlight this drops to around 400 ohms, charging the capacitor in a small fraction of a millisecond. A capacitor size of .22uF seems to match my photocells over most light conditions.
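In C the read boils down to something like this (the real code is PIC assembly; PIN_LOW/PIN_INPUT/PIN_READ and short_delay() are assumed stand-ins for the Rb0 or Rb7 port manipulation):

/* RC-time light reading: short the 0.22uF cap, release the pin,
   then count down until the photocell charges the cap back past
   the input threshold.  High count = fast charge = bright. */
unsigned char read_photocell(void)
{
    unsigned char count = 255;
    PIN_LOW();                  /* pin low as output: discharge the cap */
    short_delay();              /* give it a moment to drain */
    PIN_INPUT();                /* release; the photocell now charges it */
    while (count != 0 && PIN_READ() == 0)
        count--;                /* loop time sets the light scale */
    return count;               /* proportional to light level */
}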
Images...
I used a short piece of spring wire on the back behind the motors to prevent tipping up. The compact arrangement has a cooler look; the problem is that the weight on the front skid takes more force to overcome. I didn't try it with the LED skid, which would probably work better than the piece of wire I was using. The larger layout allows the front skid to lift off of the ground when moving forward, eliminating friction - a lesson from the photopopper.
Data dump interface...
The LED is a very useful debugging tool but there is only so much one can discern at once using short and long flashes. The following circuit allows the LED signal to be fed into a computer for more serious data inspection.
[ASCII schematic mangled in transcription. In brief: the robot's LED line drives the base of an NPN transistor through a 4.7K resistor, with the emitter grounded. The collector goes to the parallel port's busy line (pin 11) and through a 4.7K resistor to D0 (pin 2), which apparently serves as a software-driven pullup; grounds are common through pin 25. The connection is made through a modular connector.]
The learning version of the robot code contains a routine that dumps the eeprom; a logging program written in QBasic can be used to read the data, though it must be adjusted to the speed of the computer. Not pretty. Definitely not for Windows.
I'm rewriting the rest of this page as the project progresses and things change, particularly ideas about software. I might do it all over again before it's over, who knows... things are coming up that will occupy great portions of my time and these pages will probably go static for a spell, so I'm trying to push through as much as I can before real life takes over.
The latest version of the picbot2 code includes a routine that dumps the contents of the eeprom to the LED for analysis. Probably not a permanent fixture, but I've got to see what's going on. The interface hardware consists of a cable with wires that clip to the robot's LED pins on one end; the other end is an interface to a modular connector that connects to the PC's parallel port via the Parallax programmer cable. See above for the schematic and logging software. The robot had a tendency to learn moves I'd prefer it wouldn't, like spinning around in circles, so I added a new learning condition - if the robot makes the same move too many times while the environment is changing, the move's confidence is reduced. I think it's working... the last memory dump is beginning to make some sense.
No changes to picbot2 program for now except for a few comments. An improved version is in the works, see the "back to simple" section below.
Finally got the 3733 solar cell; lots better now. I found a schottky diode and wired it in series with the solar cell to prevent discharge when the cell is dark. Originally I questioned the utility of this component, but it helps out a lot. Without the diode the robot would typically have about 2.3 volts left the next morning, barely enough to keep the chip "alive". Worse, I discovered that if subjected to total darkness it would discharge much faster, sometimes within minutes. No big deal if it drains too low, but to recover the robot must be placed in light bright enough to overcome the PIC's "crash" current, about 1.5mA. With the diode it can easily survive overnight in any dark condition and recover even in ambient room light. Since the diode isolates the cell from the power rails, the solar cell's charge or short-circuit current can be directly measured without unhooking or smoking stuff - under ambient light I get a charge current of 0.17mA. Not much at all, but enough for it to go exploring once or twice an hour. A 100W bulb at about 8" increases the current to about 2.5mA. Real sunlight should provide 5-15mA of charging current (going by the cell specs; no sunshine today to measure).
The project is not complete yet; it's only a free-form-wired prototype. Electrical survivability is there, but I still have to be careful with it to avoid bending things out of shape. It did survive a couple of high-speed leaps off of the coffee table in its early days before I toned down the h-bridges, but not without attention. What really needs to be done is to lay it out onto a printed circuit board and build it with ruggedness in mind; wire-art looks cool but won't stand the rigors of the real world (with cats and dogs and kids). This part will have to wait for more project time, but it would be cool to turn this into a kit or at least more complete plans. That's a whole new set of design issues that will take much more time to figure out, maybe next year unless someone wants to pay. One issue is the erase/programming method, it gets old. The 16F84 has attractive features like more ram and eeprom-based program storage (no UV!) but probably can't be programmed in-circuit and is rated for 4 volts minimum. Regular PICs are rated at 3 volts and I run 'em down to 2.5 volts, but there's a good chance the 'F84 won't go there. The Atmel part is beginning to look really good: the data eeprom is built-in and it can be programmed in-circuit with little more than a cable and some software. The problem has been getting one, none of my suppliers carry it. More for the future.
The following text details some of the design considerations for this project, and for low-power robotics in general. All of this is subject to being rewritten, call it rough notes for now.
Selection of processor...
Not many processors are suitable for solar power, but the Atmel AT90S1200 and the Microchip PIC16C56 parts are good choices. The PIC series operates down to 2.5 volts (the datasheets say 3) with an onboard RC clock of up to 4MHz. The Atmel part doesn't need erasing equipment and only simple hardware/software to program, has an onboard 64-byte eeprom and slightly more ram than a PIC16C56, is specified down to 2.7 volts, and has a maximum 1MHz clock when running RC. Both parts provide 1K of rom for program storage, an active current of less than 2mA, and a very low current sleep mode.
Personally I prefer the PIC, mostly because it's what I'm used to and I have the stuff. To get started using the PIC you'll need a part with the erasing window (I use a PIC16C56/JW), a UV chip eraser, and programming software and hardware (I use Parallax; I like the instructions better, but watch out when using skips with multi-byte instructions). It set me back over $300 when I got into it; it's somewhat cheaper now. For someone just starting out, you might be better off with the Atmel parts since they don't really need a programmer beyond a homemade cable to an IC socket, and in-circuit programming should be possible. I have no experience with the Atmel parts beyond reading the data sheets, maybe when D.K. carries them. The PicBot could easily be an AtmelBot, though of course the software would be totally different.
Using a processor on solar...
In the process of making this thing I ran into potentially severe problems, most relating to reset behavior when running a processor chip from a slowly varying power supply - a solar cell charging a capacitor. The toughest problem was getting it to reset properly: a resistor/capacitor must be used on the reset pin (and a diode to protect the pin from the cap) to delay the reset until the processor has had time to clock up and stabilize. Unfortunately, this also delays reset when there is power, so an extra diode from the 1381 trigger is used to instantly charge the cap when it goes high.
Another problem is that when a processor powers up, the pins are in an undefined state until it reaches a certain voltage or resets. Any highs applied to the bridge discharge the cap through the motor (or through the bridge in the case of 1-1), preventing charge-up. If the robot reached the voltage where things go wacky on its discharge curve, very unpredictable things would happen, including suddenly flying off the table and through the air several feet! I heard something, looked down, robot wasn't there anymore... I finally fixed the bug by adding diodes and resistor dividers to the bridge inputs so that they wouldn't respond to anything below about 2.1 volts, above the wacky point.
Although not really a problem as such, even the 1-2mA drain of an operating PIC processor is too much to leave on all the time; it would quickly discharge the power capacitor in anything below bright roomlight. Therefore long delays (like when charging, or even between moves) should be implemented using sleep mode to stop the processor clock. Only the watchdog timer runs while sleeping, bringing consumption down into the dozen-microamp range. To operate well in normal roomlight, average consumption should be less than 100uA total, including leakage of the power caps.
Careful regulation of the duration and duty cycle of the motor signals goes a long way towards efficiency. Pager motors can operate on as little as 1 volt, so there's no need to apply the full voltage (and current) and possibly over-spin the motors. 1ms on and 1ms off (500Hz at 50% duty cycle) seems to work well down to the PIC's minimum safe voltage of 2.5 volts; 50 pulses lasting about 100ms is enough for a good healthy pop at low voltages without too much overspin when more fully charged.
Control considerations...
How do you control a robot? Depends on what you want it to do. Things it will always have to do are manage time, process sensory input, do something with the information, and use the results to drive the motors. The easiest way to manage time and power at the same time is the sleep command. After a period the processor resets, starting execution from the normal entry point but with flags set to indicate waking up from sleep. By using other flag bits the programmer can branch out of the startup code to wherever flow should go.
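A minimal C sketch of that entry logic (the real code is PIC assembly; WOKE_FROM_SLEEP() stands in for testing the timeout flag in the STATUS register, and the routing flags live in ram, which survives sleep):

/* On a PIC, waking from sleep looks like a reset, so everything
   funnels through the same entry point and flag bytes in ram route
   execution.  All names here are illustrative. */
void main(void)
{
    if (!WOKE_FROM_SLEEP())
        init();                 /* true power-on: ports, flags, memory */
    if (popping)
        next_pop();             /* resume a string of moves */
    else if (power_is_good())
        watch_or_pop();         /* check senses, maybe start moving */
    SLEEP();                    /* stop the clock; the watchdog wakes us */
}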
Processing the senses is fairly easy. With the photocells, the idea is to short the cap, then count down until the pin goes high or the count reaches 0, leaving a count proportional to light level. Comparisons can deduce features from the raw left and right light levels, like light on left, dark on right, etc. Reading the feelers is trivial. When done, a byte should represent the important features of the surroundings, enough for the robot to repeat a move should it find itself in a similar situation.
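As a hedged C sketch (the bit assignments and helper names here are mine, not necessarily the picbot's):

#define DEADBAND 2              /* ignore tiny left/right differences */

/* Boil the raw senses down to one environment byte. */
unsigned char read_environment(void)
{
    unsigned char env   = 0;
    unsigned char left  = read_left_photocell();   /* RC-time reads */
    unsigned char right = read_right_photocell();

    if (left  > right + DEADBAND) env |= 0x01;     /* light on left  */
    if (right > left  + DEADBAND) env |= 0x02;     /* light on right */
    if (LEFT_FEELER() == 0)       env |= 0x04;     /* feelers active low */
    if (RIGHT_FEELER() == 0)      env |= 0x08;
    return env;
}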
Driving the motors is another easy task, just timed pulses to the motors. One could get fancy and provide proportional steering and the like, but for the task at hand not much complexity is needed. Four output pins go to the h-bridges; disallow the 1-1 condition for each bridge, pulse the motors at about a 50% duty cycle (1ms on, 1ms off in my prototypes) for about 100ms, then wait (or sleep) until time for the next move.
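Here's the same idea as a C sketch, reusing the safe_action() mask from above (MOTOR_PORT and wait_1ms() are assumed names for the port B access and the delay routine):

/* One "pop": pulse the chosen motor bits at 50% duty, 1ms on and
   1ms off, for 50 pulses -- roughly 100ms of motion. */
void pop(unsigned char action)
{
    unsigned char i;
    action = safe_action(action);   /* disallow the 1-1 smoke condition */
    for (i = 0; i < 50; i++) {
        MOTOR_PORT = action;        /* motors on */
        wait_1ms();
        MOTOR_PORT = 0;             /* motors off, coast */
        wait_1ms();
    }
}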
To give the appearance of the robot being more active, I've programmed it to move several steps at a time. 15 was chosen because that many moves discharges the robot close to the minimum safe voltage when starting from the trigger voltage of the 1381 power sensor. For my robots I often add a "watch" phase that monitors the sense inputs and only comes alive when something changes, giving the robot more time to charge and therefore more total movement. At the end of several pops the robot checks the power; if sufficient it keeps going, otherwise it's time to sleep and charge. Or it could resume watching and wait for something to happen. If one wants maximum movement, leave out the watch function and just move when there's power.
Here's the basic code structure so far...
startup
  if wakeup from sleep then goto rest
  initialize stuff for the first time
  popping = 0
rest
  if popping = 1 then goto nextpop
  if power is good then goto watch
  sleep (resumes at startup)
watch
  temp = environment
  environment = processed senses
  if temp <> environment then goto wakeup
  sleep (resumes at startup)
wakeup
  count = 15 (number of pops for awake cycle)
poploop
  [anything here that reads environment and sets action]
move
  temp = 50 (length of pop)
drvloop
  motor pins = action
  wait 1ms
  motor pins = off
  wait 1ms
  temp = temp - 1
  if temp > 0 then goto drvloop
  popping = 1
  sleep (resumes at startup)
nextpop
  popping = 0 (flow gets routed here after a pop)
  count = count - 1
  if count > 0 then goto poploop
  sleep
The original code used a hard-coded delay before nextpop, wasting power. It is more efficient to sleep between motions, as depicted here and implemented in the picbot2 code. I used two different watchdog timer delays: a long one for when charge-sleeping, and a shorter delay for when popping. Don't forget to clear the watchdog timer when processing for a long time, like in the motor drive loop. A convenient place for this is in the time-delay subroutine, since everything calls it.
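For example (clrwdt() stands in for the CLRWDT instruction, and the loop constant would be tuned to the RC clock):

/* The 1ms delay that everything calls -- a convenient single place
   to keep the watchdog cleared during long processing. */
void wait_1ms(void)
{
    unsigned char t = LOOPS_PER_MS;   /* assumed constant, clock dependent */
    clrwdt();                         /* pet the watchdog */
    while (--t)
        ;                             /* burn cycles */
}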
So much for the easy part. The difficult part is translating the sensor readings into appropriate motor signals; this is where versions of the PicBot diverge. My initial tests merely assigned hard-coded motor outs for different sensory conditions to simulate a photovore. No learning, no memories in eeprom, but highly effective for basic control...
poploop
  environment = senses
  action = forward
  if environment = light on left then action = popleft
  if environment = light on right then action = popright
  if environment = feeler on left then action = turnright
  if environment = feeler on right then action = turnleft
  if environment = both feelers then action = reverse
move
  ...
Many other instinct-based control schemes are possible, including predefined sequences of moves for certain conditions instead of strictly look-move-look-move. The both feelers condition could trigger a backup and turn response. To give the robot some unpredictability and help find solutions, some of the moves can be made randomly. Switching between photovoric and photophobic modes would probably produce interesting behavior.
Learning...
Now for the fuzzy part. At its most basic level, learning is determining if a situation is good or bad, then modifying the memories of actions that led up to the situation. Or modifying the entire system until it happens to produce good results, if you really want to simplify, but good luck converging. My tests of evolutionary recurrent networks were not fruitful, just chaotic. Conventional neural networks are often too complicated to fit into the limited confines of solarable processors, and they don't work that great even on large computers. I've found several simple solutions that do run on tiny processors. The simplest is to replace the hard-coded actions with memories (variables), then when something doesn't go right, randomize the memory corresponding to the previous move. A denser and ultimately more capable method is to allow multiple inputs to select different memories by using the processed input bits (environment) as the memory address. Here's the basic idea...
while popping
  read senses into environment
  evaluate success of last move
  if bad
    if confidence > 0
      decrement confidence
      memory(address) = action/confidence
  else
    if confidence < 3
      increment confidence
      memory(address) = action/confidence
  address = environment
  action/confidence = memory(address)
  if action not valid or confidence = 0
    action = random
    confidence = 0
  move robot according to action
This is similar to the algorithm termed "beta class" by David Heiserman in the book "How to Build Your Own Self-Programming Robot". It is not as fast to learn as a simple priority system, since it has to figure out what to do with left feeler with light on the right separately from left feeler with light on the left. The robot's behavior is determined by the conditions used to evaluate the success of the last move. Moves that result in feelers touching or other undesirable environment conditions are flagged as bad; this encourages navigation skills. To keep the robot moving forward when not impeded, reversed motors in an otherwise good environment are flagged as bad. To keep the robot from spinning in circles, making many identical moves in a changing environment is discouraged. To help the robot learn to be a photovore, an extra reward (a confidence increment) is given to good moves that move towards the light, but no punishment if it doesn't. Refer to the program code for the complete algorithm.
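For concreteness, here's the core of that cycle sketched in C. The byte packing (action in the low nibble, confidence in bits 4-5), the eeprom_read() helper, and the evaluation helpers are my assumptions; the complete conditions live in the actual program code:

/* One learning cycle of the memory/confidence ("beta") scheme.
   Each environment indexes one eeprom byte holding a move and a
   0-3 confidence.  Writes happen only when confidence changes. */
static unsigned char last_addr, last_action, last_conf;

unsigned char memory_cycle(unsigned char env)
{
    unsigned char cell, action, conf;

    /* evaluate the move just made and update its memory */
    if (move_was_bad(env)) {
        if (last_conf > 0) {
            last_conf--;
            eeprom_write(last_addr, (last_conf << 4) | last_action);
        }
    } else if (last_conf < 3) {
        last_conf++;
        eeprom_write(last_addr, (last_conf << 4) | last_action);
    }

    /* recall (or invent) a move for the current situation */
    cell   = eeprom_read(env);
    conf   = cell >> 4;
    action = cell & 0x0F;
    if (conf == 0 || !action_valid(action)) {
        action = random_action();   /* no usable memory: trial and error */
        conf   = 0;
    }
    last_addr = env; last_action = action; last_conf = conf;
    return action;                  /* caller pops the motors with this */
}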
The search for other learning methods...
The basic memory/confidence method works best when there aren't too many memories. The failing is that the robot must visit every environmental condition in order to determine a correct response via trial and error. It seems unfair that the robot must relearn a simple feeler reaction just because the lighting is different. I was experimenting with a couple of enhancements that would expand the memory space and help generalize memories, but bugs intruded. Simpler is better, and for what it is the simple algorithm works - the robot figures out what to do when it hits stuff, and eventually learns to use light cues to navigate around stuff. What more could a bot-maker want? A rudimentary sense of self, for one thing; the present creature is purely reactionary (even if with learned responses) and hasn't the slightest clue where it is, has been, or is going. Don't expect that to change for some time. (self-aware code is extremely difficult to write:)
Still, there must be something not so mechanical that naturally generalizes but is simple enough to run on a PIC. One possibility is a simulation of a neural net that stores connections and weights in the eeprom. One must be careful though: the eeprom is a slow device (10ms per write) and limited to about a million writes. The simple "beta" algorithm writes at most one byte per cycle, and only when it runs into stuff. A write every 5 seconds on average (for example) translates to about 2 months of continuous operation - 1,000,000 writes times 5 seconds is about 58 days (*). Being a solar powered device that mostly sleeps, it would probably take years to wear out just the section of the eeprom being used. And it wouldn't take much to program it to select a new page should the current page return an error.
I could afford somewhat more eeprom activity, but probably not the level of updating that goes on in most of the neural network simulations I've seen. Not to mention the complexity of programming them. I'd prefer the robot just "get it" and then not change anything unless it has a reason. Bumping into a wall is not sufficient reason unless the response was not effective, and even then maybe it would try again before messing with the network over what might just be a bump on the table. Neural networks? Maybe, if they were allowed to train until they work and then fixed to keep from wearing out the eeprom. Perhaps a library of networks could be acquired and stored, then selected from for varying situations.
Another possibility is evolved neural networks. I did some experiments with simple state machine networks where bits defined the connections between the neurons; however, these had a strong tendency to learn the training sequence rather than the associations, so I didn't go anywhere with them. I wonder if I discounted them too early, as a sense of sequence is lacking in typical approaches. I found that "evolution" didn't necessarily require many members as is usually the case; simply randomly flipping bits and restoring the previous network when the changes don't work is enough to find a solution, provided the problem isn't too difficult. It might take 1000 cycles, but that's no more computation than 10 members converging in 100 generations. That's a lot of popping though; the old simple methods master the environment long before a "shake 'em 'till it works" network converges. But what about the quality of the information gained? Straight environment-memory mapping (Heiserman, perceptrons, etc.) can at best react to a situation with an appropriate move. A state machine network is a chaotic thing; it's difficult to tell exactly what it has learned and even harder to determine how or where the memories are stored. It just does. It is tempting to code something simple to see what it does. Only just for fun - I don't want to be disappointed if all it does is dance.
(*) It should be noted that for the currently specified 24LC65 eeprom, the 1M write endurance applies only to the last two pages; the rest is rated at 10K writes. Perhaps a 24LC08 eeprom, with 4 pages of 1M endurance, would be a better choice. Also, the endurance rating is for writes to each location, not total writes. Updating the same memory every 5 seconds would be an extremely heavy learning load for the basic learning scheme, but with alternate learning techniques it might come to that or worse. A survivable 'bot with an eeprom memory should take steps to preserve successful "nets" and have an alternate (simple) learning scheme that runs entirely in ram for when the inevitable happens and the eeprom becomes a rom.
Back to simple...
I wonder if anyone reads this stuff... if it sounds like I'm confused, well I am but have no worry, it's a very recreational kind of confusion. A mind puzzle - determining ways to impart rudimentary intelligence to a mechanical body. There is no "correct" answer, just various paths to limited success. No matter what, it isn't going to be a breakthrough except as it applies to what it is, a tiny low-power limited resource robot. The only breakthroughs are getting the code to fit!
"Rudimentary intelligence" is totally subjective, means whatever you want it to mean but it implies some kind of ability to solve problems that confront it rather than just doing the hard-wired thing. Wiring up a random move selector to the feelers of an otherwise standard photopopper (maybe with the tiny Zetex H-bridges for reversing) would meet that definition. A hard-wired control scheme easily gets stuck if the hard-wired move doesn't work, picking random moves would have a much greater chance of eventually succeeding (perhaps use the standard response first, if that doesn't work then pick a random response). Ideas for a non-computerized popper in the thought stages, which I'd like to be just slightly smarter than the norm. Perhaps I should modify the non-eeprom non-learning popper code to simulate this "alpha" level behavior.
The simple confidence-based "beta" scheme goes beyond that, since it can remember the moves that work and recall them directly. It may be purely reactionary, but it does a good job of learning to navigate around things on the coffee table. It rarely gets stuck unless it runs up against physical limits (when the "tires" are less than 1/8" across, there are definitely limits), and in its designed environment it will try moves until it is unstuck or runs out of juice. The basic algorithm isn't good at generalizing learned information, having to re-learn it for each variation of the situation, but at least it doesn't forget what it knows about something when learning about something else.
I quick-coded a simple neural net with six inputs, four outputs, and weighted connections from the inputs to the outputs. I guess that would be a single-layer perceptron? It works. It can even generalize. BUT... to learn anything useful one has to show it input/output pairs over and over and over. It will learn any particular association quickly, but at the expense of the other memories. Only after hundreds or even thousands of passes will it settle into something that satisfies all conditions - if ever, and it can't solve XOR problems at all. The old-fashioned confidence method doesn't have these problems, since it stores each memory in its own location indexed by the environment. It looks like simple neural nets are out, unless simplified to the extreme like a bit-net where only one input is active at once, equivalent to a "beta" method with only one level of confidence and very few memories. That would be a good approach for achieving learning without using the eeprom at all, but I think I'd rather have more.
I've tried several times to extend the basic algorithm to fix some of its failings, only to end up failing because of bugs. A warning to all who try this sort of stuff: be prepared to spend endless nights poring over source trying to figure out what went wrong. Machine-coding is very tricky stuff; one misplaced bit and it all comes down. Finding the misplaced bits can be an exercise in frustration, often best dealt with by starting over from the last good code. This time I'm going to add the enhancements one at a time to prevent unexplainable breakage. First step... expand the memory to 256 locations by adding in bits to represent the robot's last move. Only the reverse bits are considered; adding in all four action bits would increase the address space to 1024 locations, which would exceed the high-endurance portion of the eeprom, not to mention take a long time to explore and work out moves for. I chose the reverse bits to separate the memories since they indicate a problem situation that probably needs a separate memory. The code is running now and seems to be working ok. Slower to learn, but that was expected. The next step will be to get the flash-dump routine up in page 2 to make room for new code that will search the memory for similar situations if the current memory has no confidence. This should produce workable moves for situations the robot hasn't previously encountered.
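The addressing change itself is tiny; something like this, with the bit positions being my guesses for illustration:

/* Fold the previous move's two reverse bits (Rb3 = right reverse,
   Rb5 = left reverse in the action byte) in with the environment,
   expanding 64 memories to 256. */
unsigned char memory_address(unsigned char env, unsigned char last_action)
{
    unsigned char rev = ((last_action >> 3) & 1)   /* right reverse */
                      | ((last_action >> 4) & 2);  /* left reverse  */
    return (env & 0x3F) | (rev << 6);
}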
Note for those who came here directly... this design has been improved into the PIC-Bot II. If building, consider making that one instead unless you really need 64K of eeprom, as the low power '84 chip is better for this application and is much easier to program and develop for. Thanks everyone for reading my stuff! I appreciate the comments.
May 17, 1999... finally scanned images, sorry it took so long. He looks a little crooked these days, the result of a 3' fall from the window a few months ago. Spat! Still works, but it's a wonder.