Fire Fighter Project

Page Modified on: 05/07/04


Trinity College in Hartford, CT holds a contest for fully autonomous robots every year. The robot's task is to find a candle in a maze and put it out. The contest has a number of rules; please follow the link below for the complete rule book. Click for complete rules

For humans, finding a lit candle in a small maze is not a big deal. For robots, on the other hand, it is a fairly difficult task. It is hard for a robot to recognize patterns that the human brain picks up in a matter of seconds. During the course of this project I stepped into the arena and tried to imagine what it would be like to be a robot. The task then becomes finding the candle with a limited number of sensors and a limited amount of memory.

Robots today are limited by the accuracy of their sensors and by how little memory they have to remember the last task or the sequence they just executed. Behavior-based robots "plan" the next step based on current conditions and data compiled from previous steps; these computations take time. Other robots have no memory of the steps they performed: these purely reactive robots simply react to the current sensor readings. Behavior-based robots are slower, reactive robots are faster, and each is suited to different applications. The ideal is to combine the two approaches into a robot that uses the best of both worlds.

Picture of the maze at UML

At first the task looks fairly straightforward: traverse the maze, find the light, find the candle, and blow it out, then, as a bonus, return home. My initial attempt was to divide the arena into a grid so that I (and the robot) would "know" the starting position, the current location, the path all the way back home, and the possible locations of the candle. Keeping track of position this way, from a known starting point, is called dead reckoning. I think somebody had this in mind when the arena was designed, because I could not find a way to divide it into squares: 16"x16" cells were too small for some rooms and too large for others, and 17- and 18-inch squares did not work either.

The other problem with dead reckoning is that the robot's wheels and gears can slip when it turns, so the shaft encoder reading is slightly off after each turn. This adds roughly 5% to the position error with every turn, and after a few turns the robot is nowhere near where it "thinks" it is. That is why mapping is such a hard problem in robotics. To correct for it I would have to re-synchronize against a known landmark whenever the robot drifts off its track.
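
To put a rough number on that compounding drift, here is a short stand-alone C illustration (my own sketch, not code that ran on the robot) that applies a 5% error per turn:

    #include <stdio.h>

    /* Illustration only: compound a roughly 5% dead-reckoning error per turn
       and watch how quickly the position estimate drifts. */
    int main(void)
    {
        double error = 1.0;                    /* 1.0 = perfect estimate */
        for (int turn = 1; turn <= 10; turn++) {
            error *= 1.05;                     /* assume ~5% added error per turn */
            printf("after turn %2d: ~%.0f%% cumulative error\n",
                   turn, (error - 1.0) * 100.0);
        }
        return 0;
    }

After ten turns the estimate is already off by more than 60%, which is why re-synchronizing against a known landmark becomes necessary.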

My next attempt was to follow both walls and look for breaks or openings in them. Every time there was a change in the wall, such as an opening or a doorway into a room, the robot sensed it and started a "new square." This attempt was fairly successful, but the robot got "confused" before the entrance to room #3. Sometimes the sensors picked up the small wall at the entrance and sometimes they did not, so the robot counted that stretch as either one square or three: either one continuous block, or a block with a wall, then a block with two walls, then another block with one wall and an opening. So this method did not work 100% of the time either.

My next attempt was to navigate the arena by following both walls at once. That seemed like a good idea until I actually tried it. The idea was to keep the robot an equal distance from both walls: using a normalizing function on the readings from the ET sensors, the robot would steer between two continuous walls. Every opening was a challenge, though; first the robot overcompensated for the changing distances, then it did not compensate enough. I realized this approach was not the best one, but it seemed the most reliable with the hardware I had, so I decided to go with it.
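
A minimal sketch of that normalizing idea, assuming two side-facing ET (IR distance) sensors whose readings grow as the wall gets closer; the helper names and constants below are hypothetical stand-ins, not the actual code:

    /* Hypothetical stubs standing in for the Handy Board sensor/motor calls. */
    int read_left_et(void)  { return 0; }   /* larger value = left wall closer  */
    int read_right_et(void) { return 0; }   /* larger value = right wall closer */
    void set_drive(int left_power, int right_power) { (void)left_power; (void)right_power; }

    #define BASE_POWER 60
    #define GAIN        4    /* how much raw imbalance maps into steering */

    /* Center the robot between two walls: a positive difference means the left
       wall is closer, so speed up the left side to steer away from it. */
    void follow_both_walls_step(void)
    {
        int diff = read_left_et() - read_right_et();
        int correction = diff / GAIN;

        set_drive(BASE_POWER + correction, BASE_POWER - correction);
    }

The weak spot is exactly the one described above: at an opening, one reading collapses and the correction swings hard in that direction.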

I tried to reuse as much code as possible and to test blocks of code separately from each other, so I knew what was working and what needed improvement or more work. I reused the base of my robot from the Egg Hunt contest, the light-seeking code from a previous assignment, and the wall-following code.

Identifying each room turned out to be a fairly easy task. All I needed was a sonar sensor pointed into the room, measuring left, right, and straight ahead. If the return values fell within certain ranges I could tell which room it was. For example, the first room was small, so all three readings were proportionally short. The second room was about the same size as the first; I told the two apart by the turn taken to enter: a left turn meant room #1, a right turn meant room #2. Room #3 was the largest and room #4 the longest. One problem with this room recognition is that the position of the robot matters: if the robot sits at a bad angle, the readings will be off, so the tolerances have to be quite wide to compensate. The reward is that the robot would "know" exactly where it is and exactly how to get home, which could be used after the candle is put out, or to search each room without getting lost. The code for this implementation turned out to be quite complicated, and because of the lack of time I had to scale it back in order to finish the robot in time.
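
A sketch of how such a room check could look, assuming three sonar ranges (left, ahead, right, in inches) taken at the doorway; the thresholds here are made-up examples, not the values I used:

    #include <stdio.h>

    enum room { ROOM_UNKNOWN = 0, ROOM_1, ROOM_2, ROOM_3, ROOM_4 };

    /* Classify a room from three sonar ranges taken at its entrance.
       The tolerances are wide because the robot's angle skews the readings. */
    enum room classify_room(int left_in, int ahead_in, int right_in, int turned_left)
    {
        if (ahead_in > 60 && left_in > 40 && right_in > 40)
            return ROOM_3;                          /* largest room */
        if (ahead_in > 80)
            return ROOM_4;                          /* longest room */
        if (ahead_in < 40 && left_in < 40 && right_in < 40)
            return turned_left ? ROOM_1 : ROOM_2;   /* same size: told apart by the entry turn */
        return ROOM_UNKNOWN;
    }

    int main(void)
    {
        printf("room = %d\n", classify_room(35, 30, 25, 1));   /* sample readings -> 1 */
        return 0;
    }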


What I ended up doing:

I implemented right-wall following. That was a bit hard because initially the robot was too fast for an enclosed area like the maze. If the robot got too close to a wall, the ET sensors I used would return a faulty value and the robot would drive right into it. I slowed the robot down and adjusted the correction so it did not overcompensate when it drifted just a bit too far from or too close to the wall.
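
Here is a rough reconstruction in C of what that right-wall follower boils down to. The sensor and motor helpers are hypothetical stubs, and the dead band is the part that keeps the correction from overcompensating:

    /* Hypothetical stubs for the Handy Board calls used by the robot. */
    int read_right_et(void) { return 0; }   /* larger value = right wall closer */
    void set_drive(int left_power, int right_power) { (void)left_power; (void)right_power; }

    #define TARGET     90   /* desired raw reading at the right following distance */
    #define DEAD_BAND  10   /* ignore small errors so the robot runs straight */
    #define BASE_POWER 50   /* slowed down: at full speed the corrections overshot */
    #define NUDGE      15   /* gentle correction instead of a hard swerve */

    void right_wall_step(void)
    {
        int error = read_right_et() - TARGET;        /* > 0 means too close to the wall */

        if (error > DEAD_BAND)
            set_drive(BASE_POWER - NUDGE, BASE_POWER + NUDGE);   /* ease away (left) */
        else if (error < -DEAD_BAND)
            set_drive(BASE_POWER + NUDGE, BASE_POWER - NUDGE);   /* close back in (right) */
        else
            set_drive(BASE_POWER, BASE_POWER);                   /* straight */
    }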

The next problem was handling the 90-degree turns. It took a fair amount of trial and error to make the robot turn a clean 90 degrees to the left and to the right. Later on, as I added more sensors, batteries, and other accessories, the weight increased and I had to redo all of that tuning. I implemented a turn function driven by two global variables: one for the cornering power level and one for the straight power level. This gave me more control over the turning radius and the speed of the robot. It was a good idea, but I did not account for how long the robot spent turning, and that bit me in the end; more on this later.
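
A sketch of that turn helper, reconstructed from the description above (the names, power levels, and timing are placeholders, not the original values):

    /* Hypothetical stubs for the drive and timing calls. */
    void set_drive(int left_power, int right_power) { (void)left_power; (void)right_power; }
    void wait_ms(long ms) { (void)ms; }

    int  corner_power   = 70;    /* global: power used while pivoting */
    int  straight_power = 50;    /* global: power used when driving straight */
    long turn_time_ms   = 900;   /* tuned by trial and error for ~90 degrees */

    void turn_90(int to_the_right)
    {
        if (to_the_right)
            set_drive( corner_power, -corner_power);    /* spin clockwise */
        else
            set_drive(-corner_power,  corner_power);    /* spin counter-clockwise */

        wait_ms(turn_time_ms);                          /* timed turn: changes with weight */
        set_drive(straight_power, straight_power);      /* resume driving straight */
    }

The timed portion is exactly what had to be re-tuned every time the robot gained weight, and it is the part that caused trouble later.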

For now the robot is able to navigate inside the maze; it covers the whole area except room #1. No wall leads into room #1, because that room stands out like an island, and dealing with this special case proved to be a really hard problem. The fact that the robot has no memory of its past actions and only a limited number of sensors made it very hard to handle. I tried to make the check for each room a similar routine or state. To deal with the island problem I started the robot in the middle of the maze, following the right wall. When the robot hit the white tape marking the entrance of the first room, it would stop and check for the presence of a candle; just to be sure, it would check 60 times. If there was no candle, the robot would turn around, drive straight until it found a wall, and follow that wall to the next white marking. If a candle was present in the room, the robot would switch modes and search for the light. When the light reading dropped below a certain value it would drive straight ahead. I did this to optimize my light finder: I had learned that when one of the two light-seeking diodes reads a really low number, the candle is straight ahead.
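
The entrance-check logic, sketched in C with hypothetical sensor and mode helpers (a reconstruction of the behavior described above, not the original code):

    #include <stdbool.h>

    /* Hypothetical stubs for the sensors and behaviors described above. */
    bool on_white_line(void)    { return false; }   /* reflectance sensor over the tape */
    bool candle_detected(void)  { return false; }   /* flame/light sensor check */
    void stop_drive(void)       {}
    void turn_around(void)      {}
    void seek_light_mode(void)  {}
    void follow_wall_mode(void) {}

    #define CANDLE_SAMPLES 60   /* "just to be sure it would check 60 times" */

    void at_room_entrance(void)
    {
        if (!on_white_line())
            return;

        stop_drive();

        int hits = 0;
        for (int i = 0; i < CANDLE_SAMPLES; i++)
            if (candle_detected())
                hits++;

        if (hits > 0)
            seek_light_mode();      /* candle in this room: switch to light seeking */
        else {
            turn_around();          /* no candle: head back out to the next wall */
            follow_wall_mode();
        }
    }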

The robot then drives straight until it hits the white mat, a marker added by the organizers. The candle always stands on the white mat, and the robot has to be on the mat to be allowed to extinguish the candle. As soon as the robot gets on the mat, the fan is turned on; I do this early to minimize the time the fan needs to spin up and reach maximum airflow. While this saves time, it might not be the most efficient way to put out a candle with a fan. If the robot is lined up perfectly with the candle, the little air "jolt" created the moment the fan turns on can sometimes blow the candle out by itself. I chose to turn the fan on early because my robot scans back and forth while searching for the light, so as soon as it is on the mat it is close enough to "accidentally" put the candle out.
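
A minimal sketch of the extinguish step, assuming the fan is switched through a relay on one of the motor outputs (all names here are hypothetical):

    #include <stdbool.h>

    /* Hypothetical stubs for the mat sensor, drive, and fan relay. */
    bool on_white_mat(void)    { return true; }    /* reflectance sensor over the mat */
    void drive_straight(void)  {}
    void stop_drive(void)      {}
    void sweep_at_candle(void) {}                  /* scan back and forth while blowing */
    void fan_relay(bool on)    { (void)on; }       /* relay switched by a motor output */

    void extinguish(void)
    {
        while (!on_white_mat())
            drive_straight();       /* the candle always stands on the mat */

        fan_relay(true);            /* turn the fan on early so it spins up to full airflow */
        sweep_at_candle();          /* the scanning motion often blows the flame out by itself */
        stop_drive();
        fan_relay(false);
    }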


Modular robot

My robot has two parts: the base, which includes the lower half of the robot (the drivetrain, motors, gears, and wheels), and the top, which carries the sensors and the Handy Board. I learned that modifications are easier when the two are separate. The added bonus is that I could build two completely different drive platforms, and switching between them is as simple as popping one off and putting the other on; all I have to worry about is lining up the motor connections. If I wanted, I could keep a backup drivetrain in case the main one fails. The possibilities are endless.

Picture of the four-wheel-drive platform with the bottom and top separated

I liked the four-wheel-drive robot's maneuverability, torsional strength, and ease of programming. I disliked the added weight of the extra motors and Lego cross braces. I think I overbuilt the robot for the Egg Hunt contest: it was a hundred times sturdier than it needed to be (my original plan included disabling other robots by ramming them, but the contest rules did not allow that), and that added a lot of extra weight.

I had an idea that if I geared my robot differently and used smaller wheels I might gain extra speed, so I implemented it. I used two 24-tooth gears, one attached to the motor and the other attached to the wheel. The robot was a little jittery, which I blamed on the extra speed; I was wrong about that. It was my code. Because my robot is purely reactive, meaning it does not store any previous data or steps and just reacts to the current sensor inputs, the extra speed made it overshoot its corrections: even when only a small correction was needed, it made one large correction.

Imagine a robot that can only turn in 30-degree increments but needs to turn 35 degrees. First it turns 30 degrees and checks its sensors to see whether it is in the correct position. The sensors correctly report that it is not, and that it still needs to turn, so it turns once more; now the robot has turned roughly 60 degrees. The sensors then tell it to turn back the other way, and it does. This bad code (or design) results in an oscillating motion that scrubs off speed, wastes battery power, and wears the moving parts such as wheels and gears.

To correct this I did two things. First, I take the magnitude of the sensor readings into account: I implemented a function that normalizes the sensor values, so the robot turns only about as much as it needs to line up with the wall, either away from it or toward it. Second, I implemented a "B brain." This brain is basically a counter that is triggered when two functions are called back to back repeatedly, for example left, right, left, right... (the pattern oscillates between two states, like when the robot is stuck in a corner). When that happens, the B brain turns the robot by a random amount, then resets itself and lets the normal procedure start again.
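
A sketch of that B brain, reconstructed with hypothetical names: a counter watches for two behaviors alternating back to back, and when the pattern persists it takes over with a random turn, then resets:

    /* Hypothetical stub: pivot for a random amount of time to escape. */
    void turn_random(void) {}

    #define OSCILLATION_LIMIT 6   /* how many alternations count as "stuck" */

    static int last_action = -1;  /* e.g. 0 = turned left, 1 = turned right */
    static int flip_count  = 0;

    /* The main behaviors report each action they take to the B brain. */
    void b_brain_report(int action)
    {
        if (last_action >= 0 && action != last_action)
            flip_count++;         /* alternating left/right: likely oscillating */
        else
            flip_count = 0;       /* same action twice in a row: not stuck */

        last_action = action;

        if (flip_count >= OSCILLATION_LIMIT) {
            turn_random();        /* escape move, e.g. out of a corner */
            flip_count  = 0;
            last_action = -1;     /* reset and let the normal behavior resume */
        }
    }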

That idea was correct, except that the robot was now too fast to react to small sensor changes. It traveled through the arena (just a fancy name for the maze we tested in), but it overran the lines marking the entrances to the rooms. Those lines matter for two reasons: points are awarded for rooms searched, and after the candle is extinguished the robot has to return home without entering any room along the way.

So I went back to the drawing board and redesigned the base of the robot. I replaced the gears with a different size and swapped the tires. The redesigned robot is considerably slower than the previous version, and it also lost that "cute" look. I tend to spend a lot of time on the aesthetics of my robots; I remember spending hours designing a robot that looked like a safari off-road vehicle for the wall-following exercise in Robotics I. Right now my robot does not look very good, but I will improve that with time.

I ran into a major problem with my design: the robot kept blowing Lego motors. I did notice a pattern, though: it only blew the front motors. It did not happen every day, more like about once a month, but I had been testing since mid-December, so by the time the count reached three blown motors I had to do something about it.

First I removed two motors and installed a single caster wheel on the robot. The basic idea of a caster is a small wheel (or wheels) that is either dragged behind or pushed by the driven wheels, mounted so it can swivel in whatever direction the forces push it. I thought it would be a simple solution to my motor problems: to turn right I apply power to the left wheel, to turn left I apply power to the right wheel, and to go straight or backwards I apply equal power to both wheels in either the positive or the negative direction.
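
In code, the drive logic for the two-motor-plus-caster base reduces to a few one-liners (the helper and power value are placeholders):

    /* Hypothetical stub for setting the two drive motors. */
    void set_drive(int left_power, int right_power) { (void)left_power; (void)right_power; }

    #define POWER 60

    void go_straight(void) { set_drive( POWER,  POWER); }
    void go_backward(void) { set_drive(-POWER, -POWER); }
    void turn_right(void)  { set_drive( POWER,  0    ); }   /* power the left wheel only */
    void turn_left(void)   { set_drive( 0,      POWER); }   /* power the right wheel only */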

I built the base of the robot and it looks like this:

The design flaw of the caster became apparent when I had a hard time programming simple wall following. Before I modified my robot I had two wheels on each side, driven by two motors; after completing a turn, all I had to do to drive straight was apply equal power to the motors. With the modified, caster-equipped robot, when a turn finished and the robot was ready to drive straight, the caster was still cocked at the angle of the turn and steered the robot off in the wrong direction. This new emergent behavior produced some funny-looking circles during wall following: sometimes the robot would follow the wall perfectly, then after completing a 90-degree turn it would spin a full 360 degrees before going back to wall following. That was unpredictable and wasteful. Each spin took about 1.5 to 2 seconds, so even if it happened only five times during the competition it would cost me at least 10 seconds, and the unpredictability would probably hurt even more. Just like the commercial says: there are things money can't buy, and for everything else there is ... The unpredictability was a factor I was not willing to accept, so I went back to the drawing board and redesigned my robot again.

With everything I had learned in mind, I knew I would use four wheels and four motors. I would not compromise, for two reasons: first, I had taken the four-motor design this far and my ego did not want to be proven wrong; second, the robot's name is Bobcat, after the famous skid-steer loaders, so I wanted it to at least resemble the original concept of the robot.

I also knew what had been blowing the motors in the previous version. I had spaced the wheels in a perfect (or almost perfect) square, and the wheels I used had an insane amount of grip for their size, far beyond the Lego motors' capacity. With all the friction involved in sliding the wheels sideways during a skid turn, the motors did not have enough torque to keep the tires spinning, and sometimes they actually stalled. Electric motors like to run at speed, and they burn out if they are overstressed. What is really damaging is a stall while power is applied: the current through the stalled windings heats the wires up and shortens the life of the motor.

To lessen the stress I moved the motors and wheels closer together. This hurt the balance of the robot; the center of gravity was high and the footprint was small, but the turning radius and the motor problem were finally solved. I geared the robot 1:1 and it had amazing speed and great cornering capability. (For reference, by this time the contest was about a week away.) Testing the 1:1-geared robot, it became apparent that it was way too fast for the sensors; sometimes it would shoot over the white lines marking the entrances to the rooms, so I turned the speed down.

When I showed the robot to my professor, Dr. Fred Martin, he told me that the gearing was wrong and that it was the cause of my motor problems. He suggested changing the gears to a ratio lower than 1:1 and started disassembling the robot's base to put in a different set of gears. The problem was that the new gears did not line up with my carefully positioned motors and wheels, so my robot ended up disassembled yet again. He also suggested cross-bracing the Legos to strengthen the robot's structure.

Picture of the robot disassembled

I reassembled the robot; it now has a 4:3 gear ratio, it is cross-braced, and it is ready for action.

Modular code for the modular robot

I spend a great deal of time designing my robots so that the code that runs them is easy to write. I admit I am not a code guru and do not like coding that much; I would rather spend the time making the robot as mechanically "perfect" as possible than code around its shortcomings.

I usually write code in pieces, test them separately, and then put them together one at a time. The advantage of modular code is that I can change one function without affecting another. The drawback is that my functions usually do not communicate with each other, which would be helpful in some situations.

Wall following: the idea is that if the robot follows a wall in a maze, it will eventually get through it. It is not the most efficient way to traverse a maze, but in this case it actually helps: the rules state that points are awarded for rooms surveyed, so traveling through and "checking" each room is an advantage.

I chose right-wall following; there is no particular advantage to the right wall over the left. As discussed above, the wall-following code was a bit choppy at first and the robot oscillated a lot, but I fixed that bug: the robot now turns only as much as it needs to line up with the wall.

Ramps and wall following: a few interesting problems came up when ramps were added. First, my front sensor was mounted low on the front of the robot, so it sensed the ramp itself as a wall ahead; instead of driving up the ramp, the robot turned 90 degrees. Also, coming down the ramp at the right angle, the front-mounted sensor would pick up the ground and interpret it as a wall ahead.

After I raised the sensor, the robot was able to drive up the ramp. But if the ramp sloped downward toward a wall, the robot would sometimes hit that wall because of the momentum gained on the downhill run. I solved that by increasing the front sensor's sensitivity. The new problem is that when there is no ramp, the robot picks up walls from too far away and turns a bit too soon. There is a range of sensitivity that is a good compromise: safe for cornering off ramps, but not too trigger-happy during normal travel.

What would I do differently?

I would use a different fan; using a beat-up old standard fan was the cheap solution. The 80 mm box fan is inexpensive, but it does not move the cubic feet per minute of a squirrel-cage blower or a model airplane propeller. To get enough airflow out of a computer fan I fed it 18 volts, using two 9-volt batteries in series switched through a relay, with the relay driven by one of the motor outputs. The added weight of the batteries and the relay was a big penalty that hurt the turning radius and the battery consumption of my robot. Even with all that extra hassle, the box fan still needed about three tries to put the candle out. I realize this is not how a fire would be put out in real life, but it is a cheap way to do it for this contest; a real-world version of my robot could carry an actual fire extinguisher.

My robot's drivetrain was designed with real-world firefighting in mind. The other robots in the contest had flimsy wheels that did not seem practical for a real-world application. Unfortunately, this choice backfired on me. I tested my robot in our rough arena and it worked flawlessly, and I qualified in that "dirty" arena. Before the contest started, however, the organizers repainted the floors of the arena, and I was not aware that the traction of the floor would affect the robot's wall following. <<Continue>>

Conclusion:

I realize that robot design is a very young science compared to mathematics; it has a history of only about 30 years, and we still have a long way to go. This contest shows that computers, sensors, and robots are not yet equipped to survive in the real world. Even in a small, controlled environment such as the maze, we needed extra guides, like the tape along the entrances to the rooms; otherwise the robots would not have been able to complete certain tasks. When designing robots for a task, it also helps to put ourselves in the robot's position and work with only the limited information it has available. I will make it part of my personal quest to improve the reliability of robots.

 
