MeetingNotes
Meeting 6, Nov 1, 2007
- I found some documents and sample programs on interfacing Erlang with C, and have done some trials with them.
- We discussed the overall design. "Player" will be used to talk to the sensors/actuators (or to the Stage or Gazebo simulators). C programs handle the sensor inputs and make partial decisions. The Erlang program communicates among the processes and makes the final decision from the multiple inputs: it evaluates the partial decisions and decides which ones should be trusted (see the sketch after this list).
- I'll start on the coding work.
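A minimal Erlang sketch of this split, assuming the C sensor programs run as external ports and report {Confidence, Action} pairs encoded in Erlang's external term format (e.g. produced with erl_interface on the C side); the module and message names are placeholders of mine, not settled design:

    %% decider.erl -- hypothetical sketch; names are placeholders.
    -module(decider).
    -export([start/1, sensor/2, loop/1]).

    %% Spawn the decider plus one bridge process per external C program.
    start(SensorCmds) ->
        Decider = spawn(?MODULE, loop, [[]]),
        [spawn(?MODULE, sensor, [Decider, Cmd]) || Cmd <- SensorCmds],
        Decider.

    %% Bridge: open_port/2 runs the C program; each 2-byte-length-prefixed
    %% packet is assumed to be a term_to_binary'd {Confidence, Action} pair.
    sensor(Decider, Cmd) ->
        Port = open_port({spawn, Cmd}, [{packet, 2}, binary]),
        sensor_loop(Decider, Port).

    sensor_loop(Decider, Port) ->
        receive
            {Port, {data, Bin}} ->
                {Confidence, Action} = binary_to_term(Bin),
                Decider ! {partial, self(), Confidence, Action},
                sensor_loop(Decider, Port)
        end.

    %% Final decision: trust the partial decision with the highest confidence.
    loop(Partials) ->
        receive
            {partial, From, Confidence, Action} ->
                loop([{Confidence, From, Action} | Partials]);
            {decide, ReplyTo} ->
                %% assumes at least one partial decision has arrived
                [{_, _, Best} | _] = lists:reverse(lists:keysort(1, Partials)),
                ReplyTo ! {final, Best},
                loop([])
        end.

Whether "highest confidence wins" is the right evaluation rule is exactly the part of the design we still need to work out.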

Meeting 5, Oct 25, 2007
- Discussed one paper: "An Erlang framework for autonomous mobile robots," Corrado Santoro, Proceedings of the 2007 SIGPLAN Workshop on Erlang
- The paper describes a robotic control framework based on the Erlang language. The majority of the control logic -- the Sensing, Reasoning, and Acting modules -- is implemented in Erlang; only the "native layer" (which talks to the hardware directly) is in C.
- Erlang was originally developed by Ericsson and is now an open-source language. It's good at passing messages among lightweight processes (much lighter weight than OS threads). It's basically a functional language, and it allows code updates on the fly. But it's not good at heavy computation, like image processing. So I'm thinking of trying it for the communication between our different modules - image processing, laser obstacle avoidance, etc.
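For a flavor of the message-passing style (a toy sketch of mine, not from the paper): spawning a lightweight process and exchanging messages with it takes only a few lines.

    %% ping.erl -- toy illustration of Erlang's lightweight processes.
    -module(ping).
    -export([start/0]).

    start() ->
        %% spawn a process that waits for one message, then replies
        Pid = spawn(fun() -> receive {hello, From} -> From ! world end end),
        Pid ! {hello, self()},
        %% block until the reply arrives
        receive world -> ok end.
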
Meeting 4, Oct 18, 2007
- Discussed two papers:
"Attaining Situational Awareness for Sliding Autonomy" http://www.cs.cmu.edu/~reids/papers/SIA_HRI06.pdf
This deals with granting greater autonomy to teleoperated robots and giving the human operators better situational awareness of the robot's environment, so that requests for "help" from the robot (i.e., asking the humans for instructions on how to handle a situation the robot cannot deal with autonomously) can be answered more quickly and accurately. While the decision-makers in this case are humans, there is a direct line from this situation to one in which the robot is completely autonomous (simply increase the autonomy all the way up this sliding scale): the issue of situational awareness, as expressed in terms of human operators, is directly related to the issue of presenting enough information to an autonomous control program that it can make correct decisions just as a human would.
"Qualitative topological coverage of unknown environments by mobile robots" http://researchspace.auckland.ac.nz/bitstream/2292/619/2/02whole.pdf
This may at first seem only tangentially related to our goals (we are not really concerned with creating a robot that traverses an environment while guaranteeing it visits every part of it), but the paper covers some important concepts for developing a robot that must figure out its position in an environment. Considered in the context of avoiding obstacles / following a track, the various decomposition methods turn out to be very relevant. The whole thesis is 214 pages long, but chapters 2 and 4 relate directly to our work.
- Looked at Squeak
- Chris led the following discussion:

(Figure 1)

(Figure 2)
Summary:
The goal of my project is to create a data processing framework within Player/Stage which takes care of several of the most complex and time-consuming tasks users would otherwise have to deal with when programming a robot, allowing them to focus almost entirely on the physical design of the robot and the decision-making approach used to control it. The project, from now on referred to as the DPF (Data Processing Framework), is designed to do several things:
- Abstract away the algorithms necessary to meaningfully combine data from various sources (camera + laser, compass + GPS, sonar + laser + wheel encoders + bump sensor, etc.)
- Abstract away the mathematics necessary to deal with data on multiple levels of spatial locality (raw sonar data, sonar data as coordinates relative to the center of the robot, sonar data as coordinates relative to some fixed, earth-based origin, etc.; see the worked sketch after this list)
- Abstract away the many history / cache / data storage techniques used for various sensor / locality combinations (memory buffer of last 20 sonar readings, complete world map populated with objects detected by various sensors, running double integration of accelerometer data for dead reckoning, etc.)
- Present all of the above as a unique graphical representation, providing the user with an intuitive hierarchy of "data nodes", "histories", and "decision makers", for which simple (or complex) decision-making code can be written to control the robot, plugged into any of the various places in the tree that the user desires. See Figure 1 for an example of a fairly complex tree, and Figure 2 for an example of what the tree would look like for the Stanford Stanley robot, which won the DARPA Grand Challenge.
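As a concrete example of the "multiple levels of spatial locality" bullet above, here is a small sketch (my own illustration; the function and pose conventions are assumptions, not part of the DPF) lifting a raw sonar range reading to robot-relative and then world-relative coordinates:

    %% locality.erl -- hypothetical illustration of locality levels.
    -module(locality).
    -export([to_world/3]).

    %% RawRange: distance measured along the sensor's axis (meters).
    %% {Sx, Sy, Sth}: sensor pose relative to the robot's center.
    %% {Rx, Ry, Rth}: robot pose relative to a fixed world origin.
    to_world(RawRange, {Sx, Sy, Sth}, {Rx, Ry, Rth}) ->
        %% level 2: robot-relative coordinates of the detected point
        Px = Sx + RawRange * math:cos(Sth),
        Py = Sy + RawRange * math:sin(Sth),
        %% level 3: rotate by the robot's heading, translate to world frame
        Wx = Rx + Px * math:cos(Rth) - Py * math:sin(Rth),
        Wy = Ry + Px * math:sin(Rth) + Py * math:cos(Rth),
        {Wx, Wy}.

The DPF would perform transforms like this (and their calibration) internally, so decision-making code can request data at whichever locality level it wants.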
Data Processing Framework:
The DPF consists of the following three types of elements:
- Data Nodes & Histories: Data nodes are places where one or more data sources coalesce (various algorithms are used within data nodes to combine said data) and can be accessed directly, fed into some type of history, or fed into a higher-level data node. For example, every sensor present on a robot would have a data node, which can be accessed directly to get raw sensor data, fed into a history which might provide a circular buffer of the last 20 readings (for example), or fed into a higher level node. A higher level node might represent a place where sonar data, laser data, and hard-coded data about the physical location of the two sensors on the robot are combined into a list of objects (and their locations relative to the robot) that have been detected in the current sensor sweep. This combined data could be accessed directly, fed into a history which uses statistical methods to infer a probability that the detected objects actually exist based upon the past few sensor sweeps, or fed into a higher level node. A very high level node might combine all the data from a sonar, laser, GPS, and compass, providing a list of objects (and their locations in the real world), which could be accessed directly, fed into a history which provides a complete map of all objects ever discovered, the current location of the robot in the world, etc.
- Decision Makers: Coded by the user (not included in the DPF), decision makers are pieces of code which "attach" to one or more data nodes and/or histories, make decisions based upon the data collected, and instruct the robot to take certain actions. These can be neural networks, state machines, simple reactive controllers, or whatever the user chooses. There can be any number of these connected to any DPF; the point of the DPF is to perform all of the various data massaging and processing that a user might desire, allowing them to focus on writing decision-making code which makes use of that processed data. Every data node and history in the DPF has several standard methods of accessing its data (polling, subscription-notification, etc.), which are uniform across all data nodes and histories. The exact nature of the data provided by a given data node / history is explicitly defined, making it simple to write a decision maker which uses that data.
These elements are combined in the form of a hierarchical tree, where the lowest level of nodes represents raw data from the various sensors present on the robot, and the higher levels of data nodes represent various combinations of lower-level data nodes. There are many ways to combine most data sources (using many different algorithms), so there might be multiple data nodes which represent a combination of the same lower-level data nodes. For example, imagine a four-node system, with two bottom nodes representing raw camera and raw laser data, and two higher-level nodes: one representing a combination of the two data sources for statistical reliability purposes (giving confidence levels based upon one sensor verifying or refuting the data of the other), and one combining the data from the two sensors into a unified map of obstacles relative to the robot.
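To make the node / history / decision-maker interaction concrete, here is a sketch of a data node as an Erlang process supporting both standard access methods (push readings in, poll, or subscribe); the message names are placeholders of mine, not a settled DPF interface. A history would be a similar process keeping, say, a bounded buffer of past combined values.

    %% data_node.erl -- hypothetical sketch of a DPF data node.
    -module(data_node).
    -export([start/1, node_loop/3]).

    %% Combine is a fun that merges the latest reading from each source
    %% (e.g. sonar + laser -> list of detected objects).
    start(Combine) ->
        spawn(?MODULE, node_loop, [Combine, [], []]).

    node_loop(Combine, Latest, Subs) ->
        receive
            %% a raw sensor or lower-level node pushes a new reading
            {reading, Source, Value} ->
                Latest2 = lists:keystore(Source, 1, Latest, {Source, Value}),
                Combined = Combine([V || {_, V} <- Latest2]),
                %% subscription-notification: push the combined value to
                %% subscribers (histories, higher nodes, decision makers)
                [S ! {update, self(), Combined} || S <- Subs],
                node_loop(Combine, Latest2, Subs);
            {subscribe, Pid} ->
                node_loop(Combine, Latest, [Pid | Subs]);
            %% polling access
            {poll, From} ->
                From ! {value, self(), Combine([V || {_, V} <- Latest])},
                node_loop(Combine, Latest, Subs)
        end.
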
Meeting 3, Oct 11, 2007
Meeting 2, Oct 4, 2007
- We discussed these two papers:
- Fred also showed work by U-Penn on a bio-inspired climbing robot. It has arms that suck onto the wall, and can climb vertically.
Meeting 1, Sept 27, 2007
- Fred and Haiyang discussed the two papers:
- Use Player/Stage/Gazebo as our simulation/control platform.
- Robot control agents should share information, know each other's status, be highly reliable, tolerate single failures, and keep functioning even when some modules haven't been developed yet (see the supervision sketch at the end of these notes).
- To read:
- Search engines: Google Scholar; UML library: Academic Search, e-journal list.
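On the reliability requirement above: if we do adopt Erlang, OTP supervision trees are the standard way to get this kind of single-failure tolerance. A minimal sketch (vision_agent is a hypothetical placeholder for one control agent module):

    %% agents_sup.erl -- hypothetical sketch: a supervisor that restarts
    %% a crashed control agent so the rest keep functioning.
    -module(agents_sup).
    -behaviour(supervisor).
    -export([start_link/0, init/1]).

    start_link() ->
        supervisor:start_link({local, ?MODULE}, ?MODULE, []).

    init([]) ->
        %% one_for_one: restart only the crashed child; give up if it
        %% crashes more than 3 times within 10 seconds
        {ok, {{one_for_one, 3, 10},
              [{vision_agent,
                {vision_agent, start_link, []},   % child's start function
                permanent, 5000, worker, [vision_agent]}]}}.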