Friday, March 31, 2006

Roaring Roboraptor

u0205159 Du Xing

The Roboraptor measures about 80cm from head to tail and comes to life with realistic motion and advanced artificial intelligence. It has more than 40 pre-programmed functions, a dinosaur-like artificially intelligent personality, realistic biomorphic movements, and both direct-control and autonomous (free-roam) modes. It has three fluid bipedal gaits: walking, running and a predatory stalk, along with realistic biomorphic body movements such as turning its head and neck and whipping its tail. It even has three distinct moods: hunting, cautious and playful. It can interact autonomously with its environment, responding with mood-specific behaviours and sounds. Touch sensors on its tail, chin and mouth and sonic sensors on its head allow it to respond to touch and sound, while an infrared vision system detects objects in, or approaching, its path. Its powerful jaws can play tug-of-war games, "bite" and pull, and with "laser" tracking technology you can trace a path on the ground for Roboraptor to follow. It also offers a visual and sonic guard mode, and it even responds to commands from Robosapien V2.

If left alone for more than three minutes, Roboraptor will start to explore his environment autonomously in Free-Roam Mode, avoiding obstacles with his infrared vision sensors. Occasionally he will stop moving to listen for sharp, loud sounds. After 5 to 10 minutes of exploration, Roboraptor will power down. We can also put Roboraptor into Guard Mode, which he confirms with a head rotation. In Guard Mode, Roboraptor uses his infrared vision sensors and stereo sound sensors to guard the area immediately around him; if he hears a sound or sees movement, he will react with a roar and become animated. Occasionally he will turn his head and sniff.
Roboraptor has infrared vision sensors that enable him to detect movement to either side of him; however, the infrared functions can be affected by bright sunlight, fluorescent lighting and electronically dimmed lighting. Upon activation, Roboraptor is sensitive to sound, vision and touch. If you trigger the vision sensor on one side more than three times in a row, Roboraptor will get frustrated and turn away from you; the same happens if you leave him standing with his head facing a wall. Roboraptor uses his vision sensors to avoid obstacles while wandering around. While walking he cannot detect movement, so he will react to you as if you were an obstacle. Roboraptor can be guided around using "laser" targeting: a green targeting-assist light from the remote control will make Roboraptor move towards the light. Roboraptor's infrared vision system and the "laser" targeting are based on reflection, which means he can see highly reflective surfaces like white walls or mirrors more easily and at greater distances. Roboraptor also walks best on smooth surfaces. With his stereo sound sensors, Roboraptor can detect sharp, loud sounds (like a clap) to his left, to his right and directly ahead. He only listens when he is not moving or making a noise, and his reaction depends on his mood. In Hunting Mood, when he hears a sharp sound to his side he will turn his head to look at the source; if he hears another sharp sound from the same direction he will turn his body towards the source; and if he hears a sharp sound directly in front of him he will take a few steps toward the source. In Cautious Mood, when he hears a sharp sound to his side he will turn his head to look at the source, but if he hears a sound straight ahead he will walk away from it. In Playful Mood, when he hears a sharp sound to his side he will turn his head to look at the source; if he hears a sound straight ahead, he will take a few steps backward, then a few steps forward.
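These mood-dependent sound reactions map naturally onto a small lookup. Here is a minimal Python sketch of the behaviour described above; the function and action names are my own invention, not WowWee's actual firmware:

```python
# Sketch of Roboraptor's mood-dependent reaction to a sharp sound.
# The mood/direction behaviours follow the description above; names are illustrative.

def react_to_sound(mood, direction, repeat_from_same_side=False):
    """Return the action Roboraptor takes on hearing a sharp sound.

    mood: 'hunting', 'cautious', or 'playful'
    direction: 'left', 'right', or 'front'
    repeat_from_same_side: True if a second sound comes from the same side
    """
    if direction in ('left', 'right'):
        if mood == 'hunting' and repeat_from_same_side:
            return 'turn body toward source'
        return 'turn head toward source'   # all moods look at the source first
    # sound from directly ahead
    if mood == 'hunting':
        return 'step toward source'
    if mood == 'cautious':
        return 'walk away'
    return 'step back, then step forward'  # playful

print(react_to_sound('hunting', 'left', repeat_from_same_side=True))
# prints: turn body toward source
```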
Roboraptor has multiple touch sensors which allow him to explore his environment and respond to human interaction. Pressing the sensors on Roboraptor's tail activates the tail touch sensors, whose reaction varies depending on his mood; pressing the sensor under Roboraptor's chin activates the chin touch sensor, which likewise produces a mood-dependent reaction. There is also a touch sensor on the roof of Roboraptor's mouth: in Hunting Mood, touching this sensor triggers a biting and tearing animation, while in Cautious and Playful Moods, Roboraptor will play tug-of-war with whatever is in his mouth. You might wonder how we control Roboraptor's moods: it is done with a button on the remote control. Hunting Mood is the default mood Roboraptor is in when turned on; he can also be set to Playful Mood or Cautious Mood. As mentioned above, the moods determine the way Roboraptor reacts to some of his sensors. In Playful Mood, Roboraptor will nuzzle your hand if you approach from the side. In Cautious Mood, he will turn his head away from movement to the side. In Hunting Mood, his reactions are much less friendly.

Technology: biomorphic robotics

Biomorphic robotics is a subdiscipline of robotics focused on emulating the mechanics, sensor systems, computing structures and methodologies used by animals; in short, it is building robots inspired by the principles of biological systems. One of the most prominent researchers in the field has been Mark W. Tilden, the designer of the Robosapien series of toys. One of the more prolific annual gatherings in the field is the Neuromorphic Engineering Workshop, where academics from all around the world meet to share their research in what they call a field of engineering based on the design and fabrication of artificial neural systems, such as vision chips, head-eye systems and roving robots, whose architecture and design principles are based on those of biological nervous systems. A related subdiscipline, neuromorphic engineering, focuses on the control and sensor systems, while biomorphic robotics focuses on the whole system. Other toys by WowWee and Mark Tilden include Robosapien and Robopet.

Thursday, March 30, 2006

RoboWalker - The solution to assisted walking

u0204699 - Ong Chin Soon

RoboWalker is one of the main projects in the area of legged robots and powered leg orthotics at Yobotics, a cutting-edge robotic design, consulting and research firm specializing in biomimetic robots, powered leg orthotics and force-controllable actuators. The main aim of the RoboWalker is to assist people suffering from weakness in the lower extremities by augmenting or replacing the muscular functions of those limbs. Such disabilities have many possible causes, including stroke, post-polio syndrome, multiple sclerosis and muscular dystrophy.

A power-assisted wearable device that could provide leg strength, support and endurance for the elderly and those with lower-limb-weakening diseases, RoboWalker, if successfully developed, would be a state-of-the-art breakthrough among orthotic devices, which currently consist only of passive braces. RoboWalker webs the leg and foot in a series of artificial, exoskeletal springy tendons and muscles. Through its numerous sensors, the device infers the user's next course of action and provides bursts of muscular energy through the brace when needed. These bursts of energy provide the strength to help the user accomplish tasks (for instance, walking up stairs or standing up from a sitting position) which he or she might otherwise have difficulty completing. Nevertheless, just like its scaled-down cousins RoboKnee and RoboAnkle, RoboWalker is not useful for paraplegics (people with complete paralysis of the lower extremities), since the user must be able to put the leg where it needs to go before RoboWalker can provide the assistance to complete the motion. While the RoboWalker prototype has had rather impressive trial results, it also has certain drawbacks. Firstly, the batteries that power the device last for only a relatively short period: recharging them or fitting a new set is required after about 30 to 40 minutes of untethered assisted walking. Secondly, while the final price of the actual device is unknown, the estimated cost in the range of $10,000 may make it prohibitively expensive for most people. A successful RoboWalker would change the lives of many disabled and elderly people, and could well replace the wheelchair as their preferred means of locomotion.

Not only would this innovation greatly reduce the inconvenience suffered by people with weakness in the lower extremities, such a breakthrough would also bring huge cost savings to the social welfare system, since the need for mobility modifications (wheelchair-friendly houses, stair lifts, car lifts, home aids, etc.) would be reduced.
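The core idea of sensing the user's own effort and adding a "burst of muscular energy" can be sketched as a simple proportional-assist rule. This is purely illustrative, under my own assumptions about gain and safety limits, and is not Yobotics' actual control law:

```python
# Illustrative sketch (NOT Yobotics' actual controller): an assist device
# measures the torque the user's muscles produce at the knee and adds a
# proportional motor torque, capped for safety.

def assist_torque(user_torque_nm, gain=0.6, cap_nm=30.0):
    """Return a motor torque that amplifies the user's own effort.

    user_torque_nm: joint torque inferred from the device's sensors
    gain: fraction of the user's effort the device adds on top
    cap_nm: safety limit on the assistance, in either direction
    """
    boost = gain * user_torque_nm
    return max(-cap_nm, min(cap_nm, boost))

# Standing up from a chair demands a large knee extension torque;
# here the device contributes 60% extra, up to its cap.
print(assist_torque(40.0))   # prints: 24.0
print(assist_torque(80.0))   # prints: 30.0 (capped)
```

Because the assistance is proportional to the user's own effort, a paraplegic user who produces no torque gets no assistance, which matches the limitation noted above.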


Your eye in the sky - The Unmanned Aerial Vehicle (UAV)

Martin Wiig - s0500296

An unmanned aerial vehicle, or UAV, is an aircraft which is either self-controlled or remote-controlled. It can carry different payloads, such as sensors, cameras, radar and weapons, depending on the use of the UAV, which may be quite varied. For civilian purposes, UAVs have mainly been used for fun and recreation in the form of model planes, and also in research. In the military, however, UAVs play a role of ever-increasing importance: from the simple model planes used to train anti-aircraft gunners after World War 1, to the V1 bomb of the German Luftwaffe, to the complex, more or less autonomous aircraft of today, such as the Fire Scout and Predator. These are used for tasks such as reconnaissance, intelligence gathering, assessment of damage after a battle, target acquisition for artillery and ship bombardment, and so on. They can even carry weapons to be delivered to the enemy, luckily still with a human hand on the trigger. Currently, at least 700 unmanned aerial vehicles are being used in Iraq. Three modern UAVs, two of them in use and one still undergoing testing, are described in this posting. All of them are part of the American RQ series.

RQ-1: Predator

The Predator is a propeller-driven air vehicle 27 feet in length with a wingspan of 49 feet. It is able to operate for more than 40 hours at altitudes of up to 25,000 feet, with a cruising speed of over 130 km/h and a range of 740 km. The aircraft is part of a system of four planes, one ground control station (usually a van) and a satellite communications system. It is equipped with a satellite dish for communicating with the ground control station, with infrared sensors, cameras and radar, and may also be equipped with weapons. The Predator has been employed by the U.S.A. in Bosnia since 1995, and it is also in use in Afghanistan and Iraq. Its main use is, as with most UAVs, reconnaissance, but it is also capable of carrying up to 14 Hellfire missiles. The Predator became (in)famous when it was used by the CIA to assassinate six suspected terrorists in Yemen in 2003, the first ever attack by an unmanned aircraft outside a theatre of war. The Predator has a bright future in the U.S., as the Pentagon plans to buy at least 219 Predators over the next five years.

RQ-2: Pioneer

The Pioneer was an early bird among modern military UAVs. It began service as early as 1985, when it was employed by the US Navy. It had several teething problems, though, facing for instance electromagnetic interference from the ships it launched from, which led to several crashes. A USD 50 million research and development project was consequently launched, bringing the Pioneer to a level of "minimum essential capability". In spite of this, the history of the Pioneer is a history of success. It was used extensively during the first Iraq War, where it carried out reconnaissance, target acquisition and battle damage assessment missions, among others. It became famous when a group of Iraqi soldiers surrendered to a Pioneer, fearing the ship bombardment that usually followed an overpass.
This was the first time human soldiers surrendered to a machine, and is thus a landmark (if a somewhat scary one) in the history of robotic warfare. The Pioneer is smaller than the Predator, with a length of 14 feet and a wingspan of 17 feet. It is a propeller-driven aircraft with a range of 185 km and a cruising speed of 120 km/h, and it operates at altitudes of up to 15,000 feet. The Pioneer is also used by the Israeli military.

RQ-8: Fire Scout

The Fire Scout is an unmanned robotic helicopter still under development and testing. It showed an amazing degree of autonomy when, without interference from human hands, it was able to land on an aircraft carrier moving at 27 km/h. Landing on an aircraft carrier is known as the most difficult part of piloting a navy plane, as it requires the pilot to have very good reflexes to adjust to the ship as it pitches and rolls. The Fire Scout may thus also lead to the development of automatic landing systems for manned planes, surely a great relief for pilots.

In conclusion, it is clear that the future of unmanned aerial vehicles is bright from a military perspective. But is killing and war all these machines can be used for? Certainly not. As the technology becomes more and more advanced, UAVs are more than likely to enter civilian life to a greater degree. I can easily envision UAVs being used for monitoring rainforests and other endangered environments, for nature research, for spraying fields, and so on.

For interested readers, Wikipedia offers several links to open-source UAV projects on the net.

Robot Doctor

By: WANG LIWANG (U0205321)
A few days ago, local news on Channels 5, 8 and U reported that a Taiwan hospital had bought a few mobile robot doctors from the UK and put them into operation for daily nursing and ward rounds. The robot is equipped with a video camera and an LCD panel for display, as shown in the photo, and is remotely controlled by a doctor with a joystick. The robot doctor itself does not perform any form of physical examination, but it provides a mobile platform for communication between doctor and patients. With it, a doctor can do his or her routine in different places, which matters because doctors with specialized skills and knowledge are often required in several places at once. The robot checked up on patients, asked them how they were feeling, inspected their surgical sites to ensure proper healing, and answered questions. There is no intelligence in the robot; a laptop with a webcam could eventually perform the same task. And even though there is a video display of the doctor on the monitor, it can never give the sense of interacting with a real doctor. As a doctor from the hospital said, "Our robots would never replace all doctors on ward rounds, but they are a communication tool which allows a doctor to have direct contact with their patient." Such an application shows another potential use of robots in the medical field. It may not offer the precise control of surgical robots, but it makes doctors' work easier and more convenient. Interestingly, in the NUS control lab we also have a similar robot, created as a robot usher, with a facial display on an LCD, plus sonar sensors and stereo cameras for path planning. It should be smarter than the robot doctor above, as it can follow people by navigating itself and avoiding obstacles in its path.

[1] "Does Dr Robot usher in new era of metal medics?"

[2] "First robot doctors start work in UK hospitals", The Guardian, London, Friday, May 20, 2005, page 6

Wednesday, March 29, 2006

Kismet - the Face of the Future

U0204438 Huang Shichao Alvin

Traditionally, autonomous robots have been designed to perform hazardous and/or repetitive tasks. However, a new range of domestic applications such as household chores and entertainment is driving the development of robots that can interact and communicate with the humans around them. At present, most domestic robots are restricted to communicating with humans through pre-recorded messages. Communication amongst humans, however, entails much more than just the spoken word, including facial expression, body posture, gestures, gaze direction and tone of voice. The Sociable Machines Project at MIT has developed an expressive anthropomorphic robot called Kismet that "engages people in natural and expressive face-to-face interaction". To do this, Kismet takes in visual and audio cues from the human interacting with it through four colour CCD cameras mounted on a stereo active vision head and a small wireless microphone worn by the human. Kismet has three degrees of freedom to control gaze direction and three more to control its neck, allowing it to move and orient its eyes like a human; this not only improves its visual perception but also lets it use gaze direction as a communication tool. Kismet has a 15-degree-of-freedom face that can display a wide assortment of facial expressions, as seen in the picture, allowing it to convey various emotions through movement of its eyelids, ears and lips. Lastly, it has a vocalization system driven by an articulatory synthesizer. In terms of behaviour, the system architecture consists of six sub-systems: low-level feature extraction, high-level perception, attention, motivation, behaviour and motor systems. The visual and audio cues Kismet receives are classified by the extraction and perception systems.
The attention, motivation and behaviour systems then determine the robot's next action, which is executed by the motor system. Kismet exhibits several human-like behaviours modeled on those of an infant, such as moving closer towards an object it is interested in by craning its neck, or engaging in a calling behaviour to draw the object nearer. It also changes its facial expressions according to whether the visual and audio stimuli make it "feel" happy, sad, and so on. The use of infant behaviours is meant to simulate parent-infant exchanges, and thereby socially situated learning with a human instructor. Kismet represents the next step in human-robot interaction, where robots share a similar morphology to humans and thus communicate in a manner that supports humans' natural mode of communication. This will lead to more intuitive and "friendly" robot designs that will be more easily accepted as robots become increasingly ubiquitous in our lives. Videos of these interactions and more information are available on the Kismet homepage.
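The six-stage architecture can be pictured as a pipeline where each stage feeds the next. The toy sketch below keeps the stage names from the project's description, but the logic inside each stage is entirely my own invention for illustration:

```python
# Toy sketch of Kismet's six-stage architecture. Stage names follow the
# Sociable Machines Project's description; the internals are invented.

def extract_features(raw):          # 1. low-level feature extraction
    return {'motion': raw['motion'], 'loudness': raw['loudness']}

def perceive(features):             # 2. high-level perception
    return 'person' if features['motion'] > 0.5 else 'nothing'

def attend(percept):                # 3. attention: pick a target
    return percept if percept != 'nothing' else None

def motivate(target):               # 4. motivation: drive an emotional state
    return 'interested' if target else 'bored'

def behave(emotion):                # 5. behaviour: choose an action
    return 'lean toward target' if emotion == 'interested' else 'look around'

def motor(action):                  # 6. motor system: execute the action
    return f'executing: {action}'

def kismet_step(raw):
    return motor(behave(motivate(attend(perceive(extract_features(raw))))))

print(kismet_step({'motion': 0.9, 'loudness': 0.2}))
# prints: executing: lean toward target
```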

Tuesday, March 28, 2006

Lego Mindstorms NXT, a kid's toy (available from August 2006)

U0303505 Pham Dang Khoa

Introduction

Smarter, stronger and more intuitive than ever, LEGO MINDSTORMS NXT is a robotics toolset that gives armchair inventors, robotics fanatics and LEGO builders aged 10 and older endless opportunities to build and program robots that do what they want. Building upon the success of the globally renowned Robotics Invention System, the next generation of LEGO MINDSTORMS makes it quicker and easier for robot creators to build and program a working robot, in as little as 30 minutes. At the same time, new technologies and expanded sensor capabilities add a level of sophistication to excite and challenge more experienced robot creators.

Technology

The heart of the new system is the NXT brick, an autonomous 32-bit LEGO microprocessor that can be programmed using a PC or, for the first time in the retail offering, a Mac. After building their robots, users create a program within easy-to-use yet feature-rich software, powered by LabVIEW from National Instruments. Downloading programs to an invention is easy: users with Bluetooth®-enabled computer hardware can transfer their programs to the NXT wirelessly, or anyone can use the included USB 2.0 cable to connect their computer to the NXT. The robot then takes on a life of its own, fully autonomous from the computer. The inclusion of Bluetooth technology also extends the possibilities for controlling robots remotely, for example from a mobile phone or PDA. In one demonstration, a Bluetooth phone directed the movement of a robot; the robot was then programmed so that when it moved and bumped into something, it would send a signal to the phone directing it to snap a digital photograph.

Feature highlights

• All-new NXT intelligent brick
• 3 interactive servo motors with built-in rotation sensors that align speed for precise control
• New ultrasonic sensor that lets robots "see" by responding to movement
• New sound sensor that enables robots to react to sound commands, including sound pattern and tone recognition
• Improved light sensor that detects different colours and light intensities
• Improved touch sensor that reacts to touch or release and allows robots to "feel"
• 519 hand-selected, stylized elements from the LEGO TECHNIC® building system, ensuring robot creations are sturdy and durable while also looking authentic
• Opportunities for physical programming of robots and interaction with robots during programming
• 18 building challenges with clear, step-by-step instructions that help acclimate users to the new system, with robots ranging from humanoids and machinery to animals and vehicles
• Digital wire interface that allows for third-party developments
• Bluetooth-enabled robots that can be controlled by, and can control, any Bluetooth device

Application: kid's toy

Mindstorms NXT is said to be aimed at children 10 and older, but it's obvious LEGO is hoping the toy will also appeal to adults.
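To give a feel for the kind of logic an NXT program encodes, here is a rough sketch of ultrasonic obstacle avoidance in plain Python (not the actual NXT-G software; the sensor readings and threshold are invented):

```python
# Illustrative sketch of a typical Mindstorms program: read the ultrasonic
# sensor each cycle and steer away when an obstacle comes within a
# threshold distance. Plain Python stand-in for the NXT-G block logic.

def choose_drive_command(distance_cm, threshold_cm=25):
    """Return a motor command based on one ultrasonic distance reading."""
    if distance_cm < threshold_cm:
        return 'turn'      # obstacle ahead: rotate away from it
    return 'forward'       # path clear: keep driving

# Simulated stream of sensor readings as the robot approaches a wall:
readings = [80, 60, 30, 20, 15, 70]
print([choose_drive_command(d) for d in readings])
# prints: ['forward', 'forward', 'forward', 'turn', 'turn', 'forward']
```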

Home Vacuum cleaning robots

U0308030 PHUNG DUC KIEN

Introduction: There are many home cleaning robots available in the market today, each with its own strengths and weaknesses. Here I compare the products, to help consumers differentiate between the robots and make their own choice.

iRobot Roomba

Roomba is currently the most popular cleaning robot.
Price: Roomba Discovery: $249
Technology: Uses a front bumper sensor to detect obstacles. Dirt detection uses sound feedback, a very clever technology: as dirt is sucked into the bin, the robot interprets the dirt density from the frequency of the sound detected. The robot can hear the dirt! It automatically avoids stairs and other drop-offs using infrared sensors. Home base: when it is done cleaning or runs low on charge, it will reliably return to the charging base if it is in the same room.
Strengths: Dirt detection; less noisy than a conventional vacuum cleaner; good cleaning algorithm; can automatically return home to recharge.
Weaknesses: It will not be gentle with your valuable furniture and pets, because it actually bumps into them. The robot has difficulty cleaning carpet.

Sharper Image eVac

Price: $199
Technology: Similar to Roomba.
Strengths: Cheaper than Roomba; better cleaning power.
Weaknesses: Like the Roomba, it bumps into furniture and pets, and it has difficulty cleaning carpet. It is very noisy compared to the Roomba, and it has no function to return to a home base station.

Applica Zoombot

Price: $99
Strengths: Possibly the cheapest cleaning robot available. It can avoid stairs, though it sometimes gets stuck with one wheel hanging in the air if run on a table.
Weaknesses: Runs very slowly; poor functionality; bad cleaning power.

Electrolux Trilobite

Price: $1,799
Technology: Uses ultrasonic sensors to detect obstacles, so the robot will not bump into your furniture; thanks to these sensors, it can differentiate between objects and your pets. Magnets are everything to the Trilobite: it uses magnetic strips for room containment, magnets in the base station that tell the Trilobite where home is, and even magnets to hold the dust bin door shut. Electrolux warns that the vacuum may mistake a speaker lying on the floor for its base station, since speakers use large magnets in their drivers; our Trilobite took no interest in our large floor-standing speakers, though.
Strengths: Fantastic performance and excellent cleaning ability, especially with pet hair. It cleans carpet nicely, returns to the charging station when the batteries run low, and has a user-friendly LCD.
Weaknesses: Very expensive. It is a high-end product, after all.
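The Roomba's "hearing the dirt" idea can be sketched as a simple rate threshold: the more often debris pings against the intake, the dirtier the patch. The thresholds below are invented for illustration; iRobot's actual algorithm is not public here:

```python
# Sketch of acoustic dirt detection: classify dirt density from the rate
# of debris impacts heard at the vacuum intake. Thresholds are illustrative.

def dirt_level(impacts_per_second):
    """Classify dirt density from the rate of debris impacts."""
    if impacts_per_second > 50:
        return 'heavy'     # linger and re-clean this spot
    if impacts_per_second > 10:
        return 'moderate'
    return 'clean'         # move on

print(dirt_level(70), dirt_level(25), dirt_level(3))
# prints: heavy moderate clean
```

A robot using this signal can, for example, switch into a tight spiral pattern whenever the level reads 'heavy', concentrating effort where the floor is dirtiest.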

Monday, March 27, 2006

NEC's Health and Food Advice Robot

u0303819 Ang Yong Chee

While most of the media attention goes to highly publicized domestic robots like Sony's QRIO and Mitsubishi's Wakamaru, another Japanese giant, NEC, has announced that it has developed a new robot that is capable of tasting food.

Above: NEC's Health and Food Advice Robot, Sony's QRIO, and Mitsubishi's Wakamaru (from left). This robot's "taste buds" are a new feature added to the common existing ones, such as patrolling the home with built-in cameras to detect intrusion, recognizing faces and voices to communicate with its owners and provide information, and controlling home appliances. Officially called the "Health and Food Advice Robot", it is dubbed the world's first partner robot with a sense of taste by its creator, NEC System Technologies. The robot is able to analyze food and its ingredients and to perform food tasting. In other words, it can break down the composition of a food and differentiate among the variants of a particular food; for example, it can determine the amount of fat in a cheese and possibly what kind (brand) of cheese it is. On top of that, the robot can also offer advice to its user if it is given the user's health profile, including how to improve the user's health and eating habits based on the robot's analysis of the user's diet.

Technology behind the "food tasting"

The robot has an infrared sensor equipped on one of its arms (shown in the picture at left). It uses a property called the "spectrum reflection ratio" to determine the composition of the food: varying wavelengths of infrared light are beamed onto the food, and the spectra of the reflected infrared light are analyzed to determine the food's actual contents in terms of water, protein and other molecule types. Given its database of food compositions, the robot can then identify the food if it exists in the database, or "remember" it if it does not.
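The identify-or-remember step can be illustrated as a nearest-neighbour match against a database of reflection spectra. The spectra and food names below are made up for illustration; NEC's actual matching method is not described in the source:

```python
# Toy illustration of spectrum-based food identification: compare a measured
# reflection spectrum against a small database and return the closest match.
# All spectra and entries are invented; this is not NEC's actual algorithm.
import math

FOOD_DB = {
    'cheddar':   [0.82, 0.55, 0.31, 0.12],
    'camembert': [0.78, 0.60, 0.40, 0.20],
    'apple':     [0.30, 0.70, 0.65, 0.50],
}

def identify_food(spectrum):
    """Return the database entry whose spectrum is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(FOOD_DB, key=lambda name: dist(FOOD_DB[name], spectrum))

print(identify_food([0.80, 0.57, 0.33, 0.13]))
# prints: cheddar
```

"Remembering" an unknown food would simply mean adding its measured spectrum to `FOOD_DB` under a new name when no existing entry is close enough.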

Surveillance Robotics: Using mobile phones to control robots for household security

U0307641 Low Youliang Freddy

In the present day, using mobile phones has become the norm. While people are out at work, they may constantly worry about the condition of their household, be it thieves breaking in or an accidental fire. Combining these two concerns, the idea of a mobile-phone-controlled robot is being explored. In fact, companies such as Fujitsu Laboratories Ltd have developed such a robot, known as MARON-1. This kind of robot can be remotely controlled by mobile phone to operate home electronic devices and monitor home security. The robot is equipped with a wide range of functions, including a telephone, a camera, a remote control, a timer and surveillance equipment. With these features, it is foreseen that MARON-1 could be used to monitor homes or even offices at night, or to check up on persons requiring special care and monitoring. MARON-1 consists of a drive mechanism, a camera that can rotate left, right, up and down, a programmable remote control for home electronic appliances, and a PHS communication card that, together with specially designed i-appli software, enables the robot to be operated remotely by mobile phone. Operated this way, the robot can take pictures and relay them to the phone's screen so that the owner can check conditions at home. In addition, the robot is not static: the owner can give precise commands to move it forward or backward, or turn it in a desired direction. Also, by storing the home's layout in the robot's memory, the owner can give the robot a destination, and it will automatically navigate to that point, avoiding obstacles and maneuvering over door saddles and other surface gradations along the way. Alternatively, a pattern may be established for it to patrol a designated course. Images sent by the MARON-1 can also be used to specify a destination.
The robot's infrared remote control capability can be used to operate appliances such as air conditioners, televisions and VCRs. With today's technology, I believe it should also be possible for the robot to control devices using Bluetooth, giving it a wider range of operation. By positioning the robot one or two meters from a spot the owner would like to monitor (for example, the front hall or a window) and turning it appropriately, MARON-1 is able to detect anyone or anything entering its field of view. If it does detect an intrusion, it can sound an alarm and call a pre-set number. The robot can also be scripted to take specific actions at specific times; for example, it can be used as an alarm clock or timer, or programmed to take pictures around the house at pre-set times. With its built-in PHS capability, the robot can be used as a hands-free telephone. Frequently dialed numbers can be stored in its memory for one-touch dialing, and other commonly performed actions may be assigned to function buttons.
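A common way to detect "anything entering the field of view" is frame differencing: compare successive camera frames and raise an alarm when enough pixels change. The sketch below uses tiny grayscale grids and invented thresholds, purely to illustrate the idea (Fujitsu's actual detection method is not described in the source):

```python
# Sketch of intrusion detection by frame differencing: if the fraction of
# pixels that changed between two frames exceeds a threshold, treat it as
# movement and alert the owner. Frames are tiny grayscale grids here.

def frames_differ(prev, curr, pixel_delta=30, changed_fraction=0.1):
    """Return True if enough pixels changed between two frames."""
    changed = sum(
        1
        for p_row, c_row in zip(prev, curr)
        for p, c in zip(p_row, c_row)
        if abs(p - c) > pixel_delta
    )
    total = sum(len(row) for row in prev)
    return changed / total > changed_fraction

still = [[10, 10], [10, 10]]
moved = [[10, 200], [10, 10]]   # one bright pixel: something entered the view
print(frames_differ(still, still))   # prints: False
print(frames_differ(still, moved))   # prints: True
```

On a positive detection, the robot would then sound its alarm and dial the pre-set number, as described above.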

Saturday, March 25, 2006

Robots in Mining

U0206584 Vidhya Ganesan


“In the ten years between 1988 and 1998, 256 miners died and over 64,000 were injured in mining accidents!” “World metal prices have been falling for decades due to increases in efficiency. If a mine is unable to become more productive, it will go out of business!”

Yes! The vision of robotic mining, science fiction only a few years ago, is poised to become reality in the global mining sector, driven by the twin needs of safety and efficiency.

CSIRO's deputy chief executive for minerals and energy, Dr Bruce Hobbs, says research teams at CSIRO are trialling and developing a range of giant robotic mining devices that will either operate themselves under human supervision or else be "driven" by a miner, in both cases from a safe, remote location. "It is all about getting people out of hazardous environments," he says.[1]

Robots will be doing jobs like laying explosives, going underground after blasting to stabilize a mine roof or mining in areas where it is impossible for humans to work or even survive. Some existing examples of mining automation include

· The world's largest "robot", a 3500 tonne coal dragline featuring automated loading and unloading

· A robot device for drilling and bolting mine roofs to stabilize them after blasting

· A pilotless burrowing machine for mining in flooded gravels and sands underground, where human operators cannot go

· A robotic drilling and blasting device for inducing controlled caving.

Robots must demonstrate efficiency gains or cost savings. The biggest robot of them all, the automated dragline swing, has the potential to save the coal mining industry around $280 million a year through a four per cent efficiency gain. Major production trials of this robot were planned for late 2000.

Unlike their counterparts commonly found in the manufacturing industry, mining robots have to be smart. They need to sense their world, just like humans.

"Mining robots need sensors to measure the three dimensional structure of everything around them. As well as sight, robots must know where they are placed geographically within the minesite in real time and online," says Dr Corke. "CSIRO is developing vision systems for robots using cameras and laser devices to make maps of everything around the machine quickly and accurately, as it moves and works in its ever-changing environment," he says.

Dr Corke insists that the move to robots will not eliminate human miners, but it will change their job description from arduous and hazardous ones to safe and intellectual ones.

The Technology:

Example 1: RecoverBot [2] (used in mine rescue operations) is a one-hundred-and-fifty-pound tethered rectangular unit with two maneuverable gripper-equipped arms and four wheels supporting an open box frame; its power units, controllers and video cameras are each built with their own individual metal armor. Lowered down the target shaft to prepare a recovery, its telerobotic eyes "see" for the surface controller, and the arms lift and drag the body into a lowered net. An "aero shell" protects the robot from falling debris while it is lowered from a winch, and is removed when the bottom is reached. RecoverBot then performs its mission, observed from two points of view: the overhead camera that current mine rescue teams use to image deep shafts, and the robot itself, whose video feed gives the rescuers a second view. When the mission is completed, the victim and overhead camera are withdrawn and the robot is raised to the surface.

Example 2: Groundhog [3], a 1,600-pound mine-mapping robot created by graduate students in Carnegie Mellon's Mobile Robot Development class, made a successful trial run into an abandoned coal mine near Burgettstown, Pa. The four-wheeled, ATV-sized robot used laser rangefinders to create an accurate map of about 100 feet of the mine, which had been filled with water since the 1920s.

To fulfill its missions, the robot needs perception technology to build maps from sensor data, and it must be able to operate autonomously to make decisions about where to go, how to get there, and, more importantly, how to return. Locomotion technology is vital because of the unevenness of floors in abandoned mines. The robot also must contain computer interfaces enabling people to view the results of its explorations and use the maps it develops. The robot incorporates a key technology developed at Carnegie Mellon called Simultaneous Localization and Mapping (SLAM). It enables robots to create maps in real time as they explore an area for the first time. The technology, developed by Associate Professor Sebastian Thrun of the Center for Automated Learning and Discovery, can be applied both indoors and out.
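The mapping half of SLAM can be illustrated with a toy occupancy grid: given a known robot pose, each laser return marks the cell it hits as occupied. This is a deliberately simplified sketch (real SLAM also estimates the pose, and Groundhog's actual software is far more sophisticated); the grid dimensions and the `mark_scan` helper are my own:

```python
import math

# Simplified sketch of the mapping side of SLAM: fold laser range
# readings taken from a known pose into a 2-D occupancy grid.
GRID = 20     # grid is GRID x GRID cells
CELL = 0.5    # meters per cell

def mark_scan(grid, x, y, bearing, dist):
    """Mark the cell hit by one laser return as occupied (1)."""
    hx = x + dist * math.cos(bearing)   # hit point in world coordinates
    hy = y + dist * math.sin(bearing)
    i, j = int(hx / CELL), int(hy / CELL)
    if 0 <= i < GRID and 0 <= j < GRID:
        grid[j][i] = 1

grid = [[0] * GRID for _ in range(GRID)]
# robot at (5 m, 5 m) sees a wall 3 m ahead (bearing 0 radians)
mark_scan(grid, 5.0, 5.0, 0.0, 3.0)
print(grid[10][16])  # -> 1  (the cell at x = 8 m, y = 5 m is now occupied)
```

Repeating this for every return in every scan, as the pose changes, gradually fills in a map of the mine walls.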

"Mining can be a hazardous job. Getting robots to do the job will make mining safer and ensure the long-term viability of the industry".





Friday, March 24, 2006

U.S. Air Force testing robots as security guards

u0307999 ZHAI NING The U.S. Air Force is testing new robotic security guards to take over duties from human guards, in order to avoid potential loss of life and improve efficiency. Two major robots are on trial at Eglin Air Force Base, Florida.

One robot being tested is a Jeep-size, four-wheeled vehicle that has been equipped with radar, television cameras and an infrared scan to detect people, vehicles and other objects. It carries a breadbox-sized mini-robot that can be launched to search under vehicles, inside buildings and other small places.

Another robot is fashioned from an off-the-shelf, four-wheeled all-terrain vehicle, giving it added versatility because a human also can ride it like a normal ATV. Both vehicles can be remotely operated from laptop computers and can be equipped with remotely fired weapons, like an M-16 rifle or pepper spray.

These robots can be programmed to patrol, and when something suspicious is detected they will issue loud warnings to potential threats; interestingly, they can question intruders in different languages. But the Air Force still keeps a human nearby at all times, because the military does not want to give machines complete discretion. This is a very practical example of where security robotics stands today. But it also poses a question: how much can we trust a robot to secure us, and what social issues come with deploying such robots?

Why Do We Need Homeland Robotic Security Systems?

u0307999 ZHAI NING Do you still remember the scene of the 9/11 attacks? It was an unexpected attack that demonstrated how a small group of people can inflict huge destruction on the U.S., once thought invulnerable to large-scale terrorist attack. These events unveiled the limitless possibilities for more to come if we do not secure ourselves well. The spread of weapons of mass destruction further extends the ability of a small group with relatively limited military assets to wreak havoc through asymmetrical warfare or terror. The principal defense against surprise attacks of this or any other nature is advance warning, which inherently depends upon the timely and accurate collection and assessment of appropriate information. This is where robotic security systems come in. We need advanced detection schemes to detect what human beings cannot. We need advanced assessment technology to identify different scenarios. Because the parameters keep changing, we need a homeland security robotic system that is highly adaptive and able to learn from the past. For this new kind of homeland robotic security system, it is therefore relatively hard to achieve results as satisfactory as those of other robots.

Surveillance Robotics: Using colors to analyse

u0307999 ZHAI NING Among the possible applications foreseen for service mobile robots, surveillance robots (i.e., robots designed to replace human security guards in making rounds) are becoming more and more popular, as witnessed by the many systems commercially available and the growing interest in the research community. In this blog I will concentrate on one area: how can a surveillance robot detect unexpected changes in the environment? I found one method in a paper by Mattia from the University of Genova, Italy, which details the following mechanism: the robot "looks at" the environment through a TV camera, then compares what "it sees" at a specific moment with what "it should see" at that same location. In particular, an approach to image comparison is proposed that finds color clusters in the color histograms of the images to be compared: by analyzing the color clusters in the two images, the system detects similarities or differences between them and consequently deduces whether something has changed in the scene. That sounds like a computer vision problem, and indeed it is. According to another report by Paolo at the same university, "ad hoc" algorithms have been implemented for color cluster comparison. The details are covered in the Proceedings of the 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, July 16-20, 2003, Kobe, Japan. I want to comment on this simple idea: it is only a simplification of how humans unconsciously detect change, but it helps robots a great deal. Can the robotics community do better by consulting more life-sciences research? It may be a good place to start thinking.
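The color-cluster comparison can be approximated with a cruder but related idea, plain histogram intersection: quantize colors into coarse bins, then measure how much the reference and current images' histograms overlap. This sketch is my own simplification, not the algorithm from the Genova papers:

```python
# Toy change detection by color histogram comparison: quantize each
# pixel's color into coarse bins, then compare histograms by
# normalized histogram intersection (1.0 = identical distributions).
def histogram(pixels, bins=4):
    """Coarse RGB histogram: each 0-255 channel quantized into `bins` levels."""
    h = {}
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        h[key] = h.get(key, 0) + 1
    return h

def similarity(h1, h2):
    """Normalized histogram intersection in [0, 1]."""
    overlap = sum(min(h1.get(k, 0), h2.get(k, 0)) for k in h1)
    return overlap / max(sum(h1.values()), 1)

reference = [(200, 30, 30)] * 90 + [(30, 30, 200)] * 10   # mostly red scene
current   = [(200, 30, 30)] * 60 + [(30, 200, 30)] * 40   # a green object appeared
sim = similarity(histogram(reference), histogram(current))
print(sim < 0.9)  # -> True: enough change to raise an alert
```

A patrolling robot would store a reference histogram per waypoint and flag any location whose current similarity drops below a threshold.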

Rock-a-bye baby robot

Yan Meixian U0205044
Making robots look like humans or animals is nothing surprising anymore. Living things, especially human beings, have such a complex structure that it is a never-ending challenge to reproduce their various aspects, such as emotion, facial expressions, flexibility of physical motion and so on. So it seems to me that most of the time, the robots that make the news headlines are those which mimic human beings in the most life-like manner. A very good example is Asimo. I guess one reason people choose to produce life-like robots is that they would like the robots to be their personal companions. Facing a robot which shows some form of emotion seems more pleasant than facing a lifeless, hard machine, especially if that robot is meant to be a toy. iRobot Corporation produced a super-realistic interactive life-sized human baby toy, known as My Real Baby (MRB), with the purpose of giving young children a very stimulating play experience. The MRB comes with high-tech animatronics and emotional response software, so each doll has the ability to change its face in numerous different ways, allowing it to convey its emotions to the child playing with it. The first generation of the MRB, known as IT, could already shake hands with people, smile if anyone takes its picture, and get frightened if people get too close to it.
The second generation, BIT, used behavioral language and could also sense if he’s upside down or not and express that he doesn’t like the feeling of being inverted.
The MRB features a range of real and virtual sensors which iRobot did not give any details about. The previous generation used 5 electric motors and had orientation sensors, reed switches, a microphone and a light sensor, but the MRB did not seem to have all of these. The doll changes its emotion using the Behavior Language Operating System (developed by Rodney Brooks of MIT AI Lab and Cog fame). The behavior model matches that of a real baby very well. If it is not fed, it gets hungry and cries. If it is fed, burped and rocked to sleep, it will stop crying.
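That need-driven behaviour can be caricatured as a tiny state machine. This is purely illustrative; iRobot's Behavior Language internals are not public, and the class below is my own invention:

```python
# Toy sketch of My Real Baby's need-driven behaviour: an internal
# "hunger" need rises as time passes and drives the doll's visible mood.
class BabyDoll:
    def __init__(self):
        self.hunger = 0

    def tick(self):
        """Time passing makes the doll hungrier."""
        self.hunger += 1

    def feed(self):
        """Feeding satisfies the need and stops the crying."""
        self.hunger = 0

    def mood(self):
        return "crying" if self.hunger >= 3 else "content"

baby = BabyDoll()
for _ in range(3):
    baby.tick()
print(baby.mood())   # -> crying (left unfed for a while)
baby.feed()
print(baby.mood())   # -> content
```

The real doll layers many such needs (hunger, sleepiness, play) and maps them onto facial expressions and sounds rather than a single string.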
Here are some other amazing things the MRB can do. It can change its facial expression rather adeptly, moving its lips and cheeks and raising its eyebrows. It can also blink and suck its thumb and bottle. The doll has a great collection of different baby noises and words and can randomly combine them. What I find more amazing is that the longer you play with the doll, the more it starts to piece sentences together in a coherent fashion, just like a real baby.
A reviewer of the doll played around with it. The doll could giggle and gurgle and respond well to tickling and being burped on its back. If left untouched, the baby would sleep till it is given a gentle nudge to wake up. The reviewer then tried to be “mean” and make the doll cry, but he did not succeed since the manual stated that the MRB does not respond to aggressive behavior.
It seems quite an ideal toy for young children since it does not encourage violence. But the idea of having a life-like baby may have a bigger scope than just being toys. If we let wild ideas run, maybe one day, the robotic baby may be so realistic that couples who can’t have kids may choose to adopt a robotic baby just to get an idea of what parenthood is about.

Thursday, March 23, 2006

Security Robots Aren't Science Fiction Anymore

U0205183 Teo Yinling Security and surveillance robots have evolved much since they were introduced in the early 80s. As technology improves over the years, new security robots can do much more, and some are even replacing humans; some are no longer just security robots. The world's first autonomous security robot, ROBART I, was developed at the Naval Postgraduate School. It had collision avoidance sensors, but this research platform had no sense of absolute location within its indoor operating environment, and was thus strictly limited to navigating along preprogrammed patrol routes defined by the relative locations of individual rooms, periodically returning to a recharging station by homing on an optical beacon. From a security perspective, the platform could only detect suspected intruders, with no subsequent intelligent assessment capability to filter out nuisance alarms. The second-generation follow-on to ROBART I was ROBART II, which also operated indoors, incorporating a multiprocessor architecture and an augmented sensor suite to support enhanced navigation and intelligent security assessment. The addition of an absolute world model allowed ROBART II to: (1) determine its location in world coordinates; (2) create a map of detected obstacles; and (3) better perform multisensor fusion on the inputs from its suite of security and environmental sensors. This last feature facilitated the implementation of a sophisticated threat assessment algorithm that significantly increased the probability of detection while virtually eliminating nuisance alarms. In 2003, Wakamaru, an experimental Linux-powered humanoid robot, was developed by Japan's Mitsubishi Heavy Industries. The 3.3-foot-tall, 60-pound robot is described as the first human-size robot capable of providing companionship or functioning as a caretaker and house sitter. The battery-operated robot moves about on wheels and recharges itself when its batteries run low.
Wakamaru has an internal software platform that was developed using MontaVista Software's embedded Linux distribution and tool suite. Its project manager attributed the choice of embedded operating system to its "sophisticated software base" and "superior networking capabilities," which enabled the team to "focus on the complex programming that makes this new robot human-like." Additionally, the robust operating system played an important role in enabling Wakamaru to service a household 24 hours a day. Some of Wakamaru's main differences from other security robots are: (1) It is a robot that is friendly to people and useful for your life at home. (2) It lives with family members, speaks spontaneously in accordance with a family member's requirements, and has its own role in the family. (3) It offers natural and enriched communication in accordance with life scenes: it recognizes approximately 10,000 words required for daily life, provides topics in accordance with life scenes, and communicates in a friendly manner using gestures. (4) It acts autonomously in accordance with its own rhythm of life: the robot has a daily rhythm, moves in accordance with time and purpose, automatically charges its batteries and lives with family members. Wakamaru was introduced into the Japanese market beginning in 2004, priced at about 1 million yen, which is approximately US $14,250. The latest security robot would be the one by Hitachi, a prototype security robot on wheels that stands 22 inches tall. Hitachi's robot has a periscope camera that protrudes from its head and, though it appears awkward, it can watch for suspicious changes in the landscape and send photos to a guard. The camera can swivel, so the robot doesn't have to do an about-face to look around. The prototype, which basically has a laptop on board for a brain, can figure out the shortest path to a spot. When it gets there, if something is missing or moved, it can send back images to a security guard.
The "Star Wars"-looking robot still has problems with battery life and with recognizing objects smaller than a soda can, but the Japanese electronics maker believes the roving robot, which can figure out the best route to a spot on its own, is better than the stationary cameras now common for security. Universities and even Honda Motor Corp. have developed robots that can recognize their location and moving objects, but many such robots require marks on the floor for their cameras to pick up. Another way robots figure out where they are is by the Global Positioning System, using satellites. At present Hitachi has no plans to commercialize its prototype security robot, but that could change in the future, and the future is probably not too far away. References:

Just need the Sun & Wind to travel around Mars

U0204912 Lin Zhiqiang Explore Mars with Robots There are a number of limitations and factors affecting exploration of distant planets like Mars. One of them is battery life. NASA scientists are always looking for sustainable and renewable energy sources to run the probes and vehicles that roam around space. The 1997 Sojourner rover was only able to move about 100 meters on Mars in one month, and the Mars rover planned for 2003 will travel only about one kilometer during its entire mission. This can hardly be called Mars exploration when only such a small area of Mars is covered. Clearly, batteries alone are not the way to go for space exploration. How about utilizing what is abundant out there - sunlight and wind! A sun-seeking rover and a probe shaped like a giant beach ball are among the newest robots being tested for their potential to explore the Martian landscape. Basking in the Sun A robot called Hyperion weaves through hills and around obstacles, all the while avoiding shadows as it calculates a path that maximizes its exposure to sunlight, which it relies on for power. Named for the Greek word meaning "he who follows the sun," Hyperion was designed and programmed to always point its solar panel directly at the sun. "What makes Hyperion different is that it is more aware of its surroundings. We have added intelligence to this machine," said engineer Ben Shamah of the Robotics Institute at Carnegie Mellon University in Pittsburgh, Pennsylvania. For the past two and a half weeks, Shamah and his colleagues have been testing Hyperion in one of the bleakest and most remote places on Earth: Devon Island, north of the Arctic Circle. Devon Island, part of the Canadian territory of Nunavut, is uninhabited. Its cold, barren, rocky terrain is the closest simulation of Martian terrain on Earth. The other advantage of the location is that it has 24 hours of sunlight, perfect for testing this solar-powered robot.
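The sun-seeking idea rests on simple geometry: a flat panel collects power roughly in proportion to the cosine of the angle between its normal and the sun direction, so the planner favours headings that keep that angle small. A minimal sketch (not Hyperion's actual planner; the function names and numbers are illustrative):

```python
import math

# Illustrative solar-power model: a flat panel's output scales with
# the cosine of the angle between the panel normal and the sun
# direction (clamped at zero when the sun is behind the panel).
def panel_power(panel_azimuth, sun_azimuth, peak_watts=100.0):
    angle = math.radians(panel_azimuth - sun_azimuth)
    return max(0.0, peak_watts * math.cos(angle))

sun = 90.0  # sun due east (azimuth in degrees)
headings = [0.0, 45.0, 90.0, 180.0]
# a sun-synchronous planner would prefer the heading with the most power
best = max(headings, key=lambda h: panel_power(h, sun))
print(best)  # -> 90.0  (point the panel straight at the sun)
```

Hyperion's real planner trades this solar term off against terrain, obstacles and mission goals along an entire route, not just a single heading.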
A milestone for Hyperion was completing a 24-hour, 6.1-kilometer (3.8-mile) circuit over hilly, rocky terrain and returning to its starting point with fully charged batteries. One concern had been that the robot would travel too fast or fail to keep its panels directed toward the sun, causing it to run out of battery power before completing its mission. Although Hyperion must be developed further before it will be capable of exploring Mars, said Shamah, the current model has demonstrated that "sun-synchronous navigation can provide an unlimited source of energy enabling a rover to explore vast areas." Under good conditions Hyperion plods along at about 30 centimeters (one foot) per second, a fairly good speed compared to previous battery-dependent models. Now let's look at the other energy source that is abundant on Mars, which scientists can harness to power their robots. Blowing in the Wind Scientists at the Jet Propulsion Laboratory in Pasadena, California, have designed an explorer that can hurtle across the Martian landscape at up to 40 miles (64 kilometers) per hour. The speedy explorer is a huge inflatable ball about six meters (19 feet) in diameter that is propelled entirely by wind power. "Mars is very windy but the air is thin, which is why we need to use a big ball—it acts like a huge sail and catches a lot of wind," said Jack Jones of the Jet Propulsion Laboratory, who is leading the research. The "tumbleweed ball," as it is called, carries a payload of scientific instruments at its center, held in place with tension cords. Among the instruments are a radar for detecting underground water and a magnetometer for determining the location of tectonic plates. Neither task can be done from an orbiting craft. The roving ball is also outfitted with cameras, which sit in recessed nooks on its outer surface.
When the rover enters an interesting area that merits a closer look, scientists can send a signal to partially deflate the ball and stop it from rolling. When the ball has finished taking measurements, it can be reinflated to enable it to roll onward. The idea of inflatable rovers is not new, Jones pointed out. Beach ball-size tumbleweeds, about 0.5 meters in diameter, were tested in the past but abandoned because they frequently became wedged between rocks, making them impractical as remotely operated rovers. The idea of using a bigger ball came when Jones and his colleagues were testing a robot with three spherical wheels in the Mojave Desert. "One wheel just fell off and the wind caught it—carrying it up and down, and down and up, the sand dunes," said Jones. "It must have gone about a mile before we could catch up with it and stop it." A tumbleweed ball six meters in diameter is unlikely to get stuck or wedged. It can easily roll over the mostly small rocks that litter the Martian terrain. One concern the research team has about the tumbleweed ball rover is its lack of controllability. "I hate to admit it, but these rovers are pretty dumb," said Jones. "They just go where the wind takes them." The research team is developing a steering mechanism, which involves shifting the payload off center to force the ball to the left or the right. Next year, if funding permits, Jones wants to test the tumbleweeds on Devon Island. "There are not many places on Earth where we can just unleash giant tumbleweeds and let them roam around," he said. Jones has no doubt that the tumbleweeds will dramatically increase the potential for Mars exploration. The 1997 Sojourner rover was only able to move about 100 meters in one month, he noted, and the Mars rover planned for 2003 will travel only about one kilometer during the entire mission. "But tumbleweeds," he said, "could potentially cover hundreds of kilometers per day." 
References: Hyperion solar powered robot: Tumbleweed wind powered robot:

Wednesday, March 22, 2006

Snake-Arm Robots – They reach the unreachable

U0205081 Chow Synn Nee
Having read an earlier blog entry posted by Shaohua about the Snake-Like Robot KOHGA, I became interested in such robots: a snake-like structure is bound to have many advantages, such as easy mobility into awkward spaces and flexibility of motion, which is clearly favourable for applications in maintenance and repair. True enough, as I searched the Internet for similar robots, I came upon this snake-arm robot designed by OCRobotics Ltd. The functions that the snake-arm robots are designed to fulfill include precise positioning to place and remove fixtures and sections of pipe, tack welding, and inspection.
There are two types of snake-arm robots, the Overhead Arm and the Underneath Arm; the name indicates the direction of access to the working area. These robots are already gaining recognition for their immense industrial applications, and OCRobotics has even won a significant contract to supply these two types of snake-arm robots to a Swedish nuclear power utility for maintenance of their nuclear reactor.
So what exactly does a nuclear reactor look like? Can't normal robots perform the job as efficiently? Here is a picture showing a nuclear reactor. It's not difficult now to understand why a snake-arm robot is needed, isn't it? The snake-arm robots gain access to extremely confined areas of the nuclear reactor in order to conduct maintenance tasks. On top of this, when the path to the work site is tortuous, dangerous or unpredictable, the flexible delivery mechanism of the snake-arm robot is able to manoeuvre around the obstacles. The arms are flexible, continuously curved and have multiple degrees of freedom. What truly amazes me is that they even boast low-cost disposable or 'sacrificial' arms, which would, as the name suggests, be sacrificed should the arm have difficulty retracting itself from the work site.
The Technology
Snake-arm robots are slender and have continuously variable bend along their length. The arm is, in fact, a little like the human spine, as it comprises a large number of "vertebrae". It is a "tendon-driven" arm, with wires terminating at various points along its length. The result is that the curvature and plane of curvature of each segment can be independently controlled. The robots use what is called wire drive and have no motors in the arm itself, which leads to a compact, low-mass design. Control software then calculates the necessary lengths of all the wires to produce the desired shape.
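The wire-length calculation can be sketched with basic arc geometry: if a segment of backbone length L bends by an angle theta in a plane, a wire offset r from the centreline follows a tighter or wider arc, shortening or lengthening by r*theta. This is my own simplification, not OCRobotics' control software:

```python
import math

# Arc-geometry sketch of a tendon-driven segment: the backbone of
# length L bends into an arc of angle theta; a wire offset r from the
# centreline follows an arc of radius (L/theta -/+ r), so its length
# becomes L -/+ r*theta.
def wire_lengths(L, r, theta):
    """Return (inner, outer) wire lengths for an in-plane bend of theta radians."""
    if theta == 0:
        return (L, L)  # straight segment: both wires match the backbone
    return (L - r * theta, L + r * theta)

# 0.3 m segment, wires 2 cm off-axis, bent 45 degrees
inner, outer = wire_lengths(L=0.3, r=0.02, theta=math.radians(45))
print(round(outer - inner, 4))  # -> 0.0314  (about 31 mm of differential wire travel)
```

A controller inverts this relationship per segment: commanded curvature in, required tendon displacements out, summed along the arm for wires that pass through several segments.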
The operator uses a joystick to drive the tip and the computer does the math to make the arm follow. This tip-following capability enables a snake-arm robot to avoid obstacles and follow its tip into complex structures. The operator can also control the arm in tool or world space, whilst continuing to avoid obstacles. This technology is very scalable and the arms can be designed to be large and durable as well as small and compliant. The snake-arm robots can combine a significant payload with precise positioning and still snake into awkward spaces!
Impressive, isn't it? I guess snake-like structured robots will find more ways to contribute to industry in the near future; as of now, there have already been similar applications in exploration and industrial/service robotics, to say the least. Reference: Official website of OCRobotics Ltd. Picture of nuclear reactor courtesy of Wikipedia.

Tuesday, March 21, 2006

Introduction to the Assistive Robot service Manipulator (ARM)

eng10822 Loh Khai Choon The Assistive Robot service Manipulator (or ARM), also known as "Manus", is a 6+2 degree-of-freedom robot. Its main role is to assist people with a severe handicap of the upper limbs. Using input devices such as a keypad (4x4 buttons), a joystick, or another device attached to a non-disabled body part, the manipulator can be controlled to grasp objects with its gripper, compensating the user for their lost arm and hand functions. It can be mounted on a wheelchair or a mobile base to aid in carrying out daily activities, be it at home, at the office or even outdoors. In addition, the ARM can be conveniently folded in beside the wheelchair when it is not being used. Worldwide user studies have shown the immense benefits of the ARM for its users. They become more self-supporting and increase their participation in society, so their quality of life increases significantly. In addition, costs for professional nursing assistance can be reduced, as the ARM allows users to function much more independently at any time of the day. There are many uses of the ARM in daily life, including having and preparing meals, operating household appliances and many others. It can even do housekeeping chores such as doing the dishes and watering plants. The activities which can be carried out using the ARM are therefore almost limitless. Components of the ARM The ARM's gripper has hinged grips to ensure a firm grasp of objects and is its most versatile part. Its maximum spread is 9 cm, and it has a clamping force of 2 kg. The three hinged fingertips are covered with anti-slip material to ensure that it can grab almost any object without slipping. The clamping force can be controlled by the user and, in case of emergency, the gripper can be manually opened without any damage. Next is an LED matrix display which provides the status of the ARM to the user.
It consists of a 5x7 LED matrix and a buzzer, and the screen informs the user about the operation mode the ARM is currently in. In case of emergencies or errors, the buzzer beeps to warn the user. Several input devices are available, selected based on the capabilities of the user. They include a joystick, a keypad with 16 buttons, switches/buttons (such as head switches or foot switches) or a single switch. Corresponding software is supplied, and additional input devices are currently under development. There are two modes of control, "Cartesian" mode and "joint" mode, both allowing intuitive and effective manipulation of objects. Integration with wheelchair If the ARM is to be mounted on a wheelchair, it can be mounted on either side, depending on the user's preference and the available space. The ARM has been successfully integrated with many wheelchair models, including the following: Invacare: Storm XL and Storm3; Scandinavian Mobility: Moover 302, Moover 895; Ortopedia: Elro 90 compact; Garant: 24S; Meyra: Optimus, Sprint, Genius and 3.422; Huka: Max, Skwirrel; Wheelbase: Belize and its elongated version. Safety The ARM incorporates several safety features, including slip couplings to limit the maximum force the ARM can exert in case of collisions, motor current limiting, and continuous monitoring of the ARM's velocity, position and acceleration, among others. Transparent mode The ARM can also be remote-controlled from a PC via the transparent mode. It can be used autonomously or remotely in hazardous situations; for such cases, it can be mounted on autonomous platforms and accessed via the PC. Using this transparent mode, it can move much faster, and it can also move in two or more axes simultaneously. References:

Eldercare Robotics: A Personal Mobility Aid using shared control systems

U0205119 - Ek Li Ling Some elders suffer from conditions such as macular degeneration and cataracts, which result in vision impairment. Others may find it difficult to move around due to loss of motor coordination. There are many types of existing aids that help elders maintain their independence and mobility, but they provide limited help. Canes and guide dogs help show the way, but they do not provide a stable platform to support the elderly in the event of a trip; walkers and scooters may have stable platforms, but they do not solve the problem of vision impairment. A walker aimed at helping elders who are both vision- and motion-impaired has been designed by the Medical Automation Research Centre (MARC) at the University of Virginia. This three-wheel walker uses sensor technology to detect obstacles, and a navigation system that controls the steering when collisions with obstacles need to be avoided. The walker is an example of a passive robot. Passive robots can steer their joints but require a human to move them. The walker is a passive robot because it can only change the direction of its front wheel and cannot move forward on its own. This feature was chosen because users need to feel that they are in control, instead of being led by, or having to chase after, the walker. The walker also features rear-wheel collision detection sensors that notify the user when a back wheel may get caught by an obstacle. Three-wheel walkers have a wide wheelbase and commonly face this problem, especially in doorways. The walker aims to be an intelligent shared control system, which can be seen as a combination of two control systems inputting control signals to the walker frame. One system is the human, providing the moving force and steering the walker towards their destination. The second is the walker's own steering, which avoids obstacles and prevents falls.
The intelligent walker allows users varying degrees of control, according to the user's abilities and needs, ranging from complete control to collaborative control with the control system. This feature is aimed at letting users feel in control while increasing the ease and safety of their travel. It would be rather dangerous for elders to have a walking aid that changes direction on its own, so it is very important that the control agent does not produce big movements the user does not expect. For example, if the walker's movement differs significantly from the user's expectation, the walker might cause the user to fall instead of preventing a fall! In addition, users expect the walker to maneuver in the way they push it; the control system must not make the user feel that the walker is unresponsive or disobedient. An important design issue is therefore the ability to integrate the two control systems to provide a smooth sense of shared control. Deciding where and when to vary the degree of autonomy given to the control agent is an active area of research. The navigation system automatically varies the level of intervention according to the user's wishes and the presence of obstacles in the environment. The control agent has to be submissive, but alert. Currently, when no danger or difficulty is detected in the environment and the user and the control agent express different desires on how to steer the walker frame, the control agent gives in and the walker gives full control to the user. When the walker attempts to steer around objects or between spaces, it is controlled by a mix of user and control-agent commands. Finally, the walker's control agent only takes full control when a collision or drop-off is imminent. The walker is still in the development stages, and research is being done on different control systems to improve the user's experience of this product.
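The shared-control behaviour described here can be sketched as a weighted mix of user and agent steering commands, with the agent's weight growing as obstacles get closer. The thresholds and weighting below are my own illustration, not MARC's controller:

```python
# Toy shared-control blend for the walker: steering is a weighted mix
# of the user's command and the obstacle-avoidance agent's command.
# The agent's weight is 0 in open space, 1 when a collision is
# imminent, and ramps linearly in between.
def blend_steering(user_cmd, agent_cmd, obstacle_dist,
                   danger=0.3, caution=1.5):
    """Steering commands in [-1, 1]; obstacle_dist in meters."""
    if obstacle_dist <= danger:        # collision imminent: agent takes over
        w = 1.0
    elif obstacle_dist >= caution:     # open space: user has full control
        w = 0.0
    else:                              # in between: proportional mix
        w = (caution - obstacle_dist) / (caution - danger)
    return (1 - w) * user_cmd + w * agent_cmd

print(blend_steering(0.5, -0.5, obstacle_dist=2.0))   # -> 0.5  (user in control)
print(blend_steering(0.5, -0.5, obstacle_dist=0.1))   # -> -0.5 (agent takes over)
```

Tuning the `danger` and `caution` thresholds is exactly the "submissive, but alert" trade-off the post describes: too eager and the walker feels disobedient, too timid and it fails to prevent falls.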
The success of this product depends largely on how well the control system reads the user's intentions and collaborates with the user. Hopefully, this product can truly extend the mobility and independence of our elders.

References:
Medical Automation Research Centre. Projects: Eldercare Robotics, A Personal Mobility Aid.
Wasson, G., Gunderson, J., Graves, S. and Felder, R. 2001. Effective Shared Control in Cooperative Mobility Aids. FLAIRS ’01: 509-513.

Snake-Like Robot - KOHGA

U0205332 Yang Shaohua
The following news comes from the Daily Times (Monday, September 22, 2003) and introduces a snake-like rescue robot, KOHGA, invented by Japanese researchers. It can be used in complex environments that humans cannot reach easily or safely. In fact, a similar snake-like robot was used to search for survivors after the attack on New York's World Trade Center on September 11, 2001.
A technical paper titled "Evaluation of Snake-like Rescue Robot - KOHGA for Usability of Remote Control" has also been uploaded to IVLE for further reading.

TOKYO: Researchers in quake-prone Japan have developed snake-like robots capable of literally worming their way through the rubble of earthquake-destroyed houses to find trapped survivors.
A prototype dubbed Kohga — named after one of the two legendary schools of ninja spycraft — has been developed by Fumitoshi Matsuno, a professor at Tokyo’s University of Electro-Communications’ Mechanical Engineering and Intelligent Systems.
Snake-like robots “can go into narrow places and their long and thin bodies can disperse the weight to prevent a secondary collapse of wrecked structures,” Matsuno said. The two-metre (6.6-foot) long robot is propelled by ridged belts like tank tracks. “The survival rate of trapped people is said to plunge after 72 hours, and rescue operations are a race against time,” Matsuno said. Kohga can be dismantled into about 10 parts for transport to disaster sites. “It is very important that rescue robots can be transported easily,” Matsuno said.
Research in many Japanese universities is geared towards earthquake rescue and recovery due to the archipelago’s vulnerability to large tremors. The most recent, a huge earthquake that struck the western city of Kobe in January 1995, killed more than 6,000 people. Most died of exposure or asphyxiation after being trapped under rubble.
The radio-controlled Kohga, which is equipped with a built-in camera for remote monitoring, is battery powered or can be fitted with a power cable in situations where the radio signals could interfere with other rescue services.
Another snake robot, Moira, was developed by Kyoto University systems science associate professor Koichi Osuka. Its name is an acronym of “mobile inspection robot for rescue activity” but is also that of the Greek goddess of fate.
Moira’s body has two sets of caterpillar tracks on either side of its sections that push it through rubble. “The Moira can move more powerfully if it pushes against rubble from the side,” Osuka said.
The robot is 1.4 metres long and weighs 18 kilogrammes (39.6 pounds). “Snake-shaped robots are especially suited to rescue operations in Japan or other Asian countries since wooden structures leave little space to go through when they collapse, unlike buildings made of stone or mud,” he said.