Meet the Team: OSU Personal Robotics Group

Bill Smart, head of the OSU Personal Robotics Group, explains how their self-driving wheelchair will give dignity and independence to wheelchair users

Self-driving wheelchair

How does your self-driving wheelchair work?
We’ve developed a kit you can add to a standard power wheelchair. The kit comprises a couple of sensors – laser range-finders – a computer and other electronics. Using the sensors, the chair can build a map of the environment and show it to the user, who then selects a place to go. The chair then decides how best to get there, and drives to that point.
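In outline, that is a classic autonomous-navigation pipeline: build a map from the laser scans, let the user pick a goal on it, then plan and drive a route. The team's own code isn't shown here, but a minimal sketch of the planning step – A* search over an occupancy grid, a standard choice for this kind of navigation – might look like this in Python (all names are illustrative):

```python
import heapq

def plan_path(grid, start, goal):
    """A* over an occupancy grid: grid[r][c] is True where the laser
    range-finders saw an obstacle. Returns a list of (row, col) cells
    from start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    parents, best_g = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in parents:                   # already expanded
            continue
        parents[cell] = parent
        if cell == goal:                      # walk back to recover the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, cell))
    return None
```

A real chair would rebuild the map as the sensors update and smooth the resulting cell path into motor commands, but the "decides how best to get there" step reduces to a search of this kind.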

What impact could your invention have for people and society?
We can dramatically improve the quality of life of hundreds of thousands of people all over the world with this technology.
Many people in powered wheelchairs, especially those with very limited ability to move, rely on other people to drive them around. Even if they have some ability to drive themselves – using an eye-gaze tracker and a computer interface, for example – it is a slow, laborious process. It ties up their eyes, which they also use to generate speech with an assistive device. This means that when they are driving they can’t do anything else: they can’t “talk” or even look up. They must focus on the driving interface.
Our system gives these people back their independence and dignity, allowing them to look at their surroundings and interact with others as they move about. Since the chair knows where it is, we can also simplify the interfaces wheelchair users rely on to control their environment (the lights or TV, for instance), showing only the controls for devices that are physically nearby and making those devices much easier to use.
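That location-aware simplification is easy to picture in code. A toy sketch, with hypothetical device names and map positions, filtering the interface down to whatever is within reach of the chair's estimated position:

```python
import math

# Hypothetical registry: device name -> (x, y) position on the chair's map
DEVICES = {
    "hall light": (2.0, 1.5),
    "living-room TV": (6.5, 3.0),
    "bedroom lamp": (12.0, 8.0),
}

def nearby_controls(chair_xy, devices=DEVICES, radius_m=4.0):
    """Show only controls for devices within radius_m of the chair."""
    cx, cy = chair_xy
    return [name for name, (x, y) in devices.items()
            if math.hypot(x - cx, y - cy) <= radius_m]

print(nearby_controls((3.0, 2.0)))  # -> ['hall light', 'living-room TV']
```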

What aspects of the project are you working to improve in time for the finals?
The trickiest part of the development so far has been getting the robotic package to interact with the wheelchair and make it move around. The wheelchair is not designed to be controlled by a computer.
We’re currently working to improve the navigation of the chair and make sure it works outdoors. Up to now, all of our testing has been indoors. As the competition final is outdoors, that has been a bit of a challenge for us. 

What field trials have you done so far?
We’ve been testing some of the technology, although not self-driving, on a wheelchair used by a person with ALS (amyotrophic lateral sclerosis, or motor neurone disease). He’s been using the technology for about five months. The main results have been a proof of concept of the simplified interface to devices and a much better understanding of how to make a system like this work for real, without a team of graduate students around to fix it every day. 

If you won the UAE AI & Robotics Award for Good, what would you do next?
If we win, then the award will fund the remaining development, testing and deployment of our system. We’re hoping to release the plans under an open-source licence and to help people build it and install it on their own chairs. The award would also help us to get the word out to the community of full-time power wheelchair users, and support people who can’t afford the full cost of the system themselves.

HyRIZON

HyRIZON helps to tackle problems including deaths at sea, drug running, people smuggling, illegal fishing and piracy.

Our target customers are institutions with maritime security or safety concerns. The best example in Europe is Frontex: the Mediterranean borders are seeing a rising number of fatalities (3,930 dead or missing in the year to 28 October 2016).

UNHCR estimated that over 327,800 people had made the Mediterranean crossing in the first 10 months of 2016 alone. The EU’s Frontex agency needs to monitor this huge area, even under cloud, at good resolution and with the ability to respond quickly. Low-level unmanned aircraft, unmanned boats, or even manned boats augmented with our HyRIZON intelligent payloads will greatly assist Frontex and organisations with a similar maritime remit.

We will do this with a detection system that provides unprecedented capability to find objects across 41 spectral bands at extremely low size, weight and power (SWaP). This allows HyRIZON to be mounted in long-endurance systems, exactly as intended. Continuously calibrated by an integrated spectrometer and loaded with the latest machine-vision algorithms, HyRIZON provides reliable, high-accuracy detections in varying conditions over long durations, on the sort of platforms that can patrol our oceans with low-bandwidth satellite data links.

It is virtually impossible to hide from automated hyperspectral scanning; even camouflaged targets will show a high-contrast band somewhere in the spectrum. Our algorithms take advantage of this to detect hidden objects whilst reducing the data presented to a human analyst. Our sensor payloads use the latest interferometry-based printed spectral filters to reduce mass and size, along with advanced embedded intelligence to automatically detect and discriminate objects or patterns of interest.
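HyRIZON’s own algorithms are not published here, but the effect described – a camouflaged target betrayed by contrast in some band – is exactly what classical spectral anomaly detectors such as Reed–Xiaoli (RX) exploit: each pixel’s 41-band spectrum is scored by its Mahalanobis distance from the scene’s background statistics. A minimal NumPy sketch:

```python
import numpy as np

def rx_anomaly_scores(cube):
    """RX anomaly detector. cube has shape (H, W, B), one spectrum of
    B bands (here, 41) per pixel. Returns an (H, W) map of Mahalanobis
    distances from the scene-wide background distribution."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)
    mu = pixels.mean(axis=0)                       # background mean spectrum
    cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))
    diff = pixels - mu
    scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return scores.reshape(h, w)
```

Thresholding the score map yields the candidate detections that feed the low-bandwidth reporting scheme described below.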

Having worked in this area for over a decade (for both commercial and governmental customers), we have seen significant demand for lower-cost systems for a different subset of missions. Whatever their altitude and size, long-endurance aircraft run a constant energy balance: they cannot afford to carry large, heavy payloads. For missions that involve remote operation over long distances, the payload has to include some decision-making ability. It has to include AI, or the platforms cannot perform their missions and people will keep dying.

HyRIZON is specifically designed to remove the need for a high-data-rate link – the intelligence is on board. If the aircraft has not passed any targets of interest, it simply sends a regular status message to confirm location, health and adherence to the commanded plan. Only when a potential detection occurs does the aircraft send images.
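That reporting logic is simple to sketch. The interfaces below (link, nav, detector, camera) are hypothetical stand-ins for the real avionics:

```python
def report_cycle(link, nav, detector, camera):
    """One cycle over the satellite link: always a compact heartbeat,
    imagery only when the on-board detector actually fires."""
    link.send({"pos": nav.position(),           # location
               "health": nav.health(),          # system health
               "on_plan": nav.on_plan()})       # adherence to commanded plan

    detections = detector.scan(camera.frame())
    if detections:                              # rare case: send evidence
        link.send({"detections": detections,
                   "chips": [camera.crop(d.bbox) for d in detections]})
```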

 

A Dutch designer is determined to make robots less robotic

The words ‘robot’ and ‘robotic’ conjure up images of rigid, unemotional automatons that are as far removed from sensitive, hot-blooded human beings as you can get. But as robots creep increasingly into our everyday lives, one designer is out to change that feeling.

Rob Scharff, a researcher at Delft University of Technology in the Netherlands, has developed a 3D-printed soft robotic limb that responds to a human handshake by squeezing your hand back, mimicking human-to-human interaction.

“Currently, the feedback that robots are able to give humans is underdeveloped as compared to human-human communication,” explains Scharff. “In human-human communication, verbal communication is supported and complemented by body language. Integrating these human-like qualities in robotics can help to make communication with robots more intuitive.”

Scharff’s soft robotics prototype is printed from a flexible material and integrates air chambers in the palm of the robot’s hand, which expand and contract in response to pressure, such as that from a human grasp, causing the robot’s fingers to grip more or less tightly. The fingers and thumb of the hand can be controlled separately, and the robot’s wrist rotates in both directions – making the robot all the more human as it does so.
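The article describes the behaviour rather than the control law, so the following is purely illustrative: one simple way to get "squeeze back when squeezed" is to map the pressure measured in the palm chambers to an inflation command, with made-up gains and thresholds:

```python
def handshake_step(palm_pressure_kpa, baseline_kpa=5.0, gain=0.08, max_cmd=1.0):
    """Map grasp pressure above a resting baseline to a chamber
    inflation command in [0, 1]: squeeze harder, get squeezed back."""
    excess = max(0.0, palm_pressure_kpa - baseline_kpa)
    return min(max_cmd, gain * excess)

for p in (4.0, 8.0, 15.0, 30.0):               # firmer and firmer grasps
    print(p, "kPa ->", round(handshake_step(p), 2))
# 4.0 kPa -> 0.0, 8.0 kPa -> 0.24, 15.0 kPa -> 0.8, 30.0 kPa -> 1.0
```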

“Qualities [such] as movement and tactility become parameters that designers can play with to design expression,” he adds. “Designing a robot’s expression is no longer limited to making use of the existing actuators [like] screens and speakers, but can be deeply embedded in the design of the robot’s actuators, sensors and body.”

The robotic limb was on show in October at Dutch Design Week 2015, an annual event showcasing the designs of the future. Scharff is looking at developing the technology into custom 3D-printed gloves that can help stroke victims learn to grip objects again. Soft robotics has many potential uses: making hitherto cold, rigid robots seem softer and more human-like holds much promise for prosthetics, care robots, and even industrial grippers where a delicate touch is required. Such developments might go some way to making robots less robotic.

RE-ACT Team

We propose a novel robotic system for physiotherapy and rehabilitation of the upper limb (arm). It will address pathologies ranging from post-stroke neuromuscular deficiencies to cerebral palsy in infants.

Our robot is primarily for home use and for patients suffering from loss of motor control. Worldwide, up to 1 billion people suffer from neurological disorders, many of them disabling. It is estimated that in the UAE alone a person suffers a stroke every hour. We could offer these patients a greater chance of a life that works around their motor disability.

Existing technologies, such as exoskeletons or haptic manipulators, have the proven advantages of greater patient involvement and, in many cases, faster and more effective rehabilitation. They are, however, expensive, and available on site only at a few hospitals or specialised clinical centres. Moreover, they are rarely used, as they require highly trained staff to deliver the therapy.

With the RE-ACT robot, the patient would be able to benefit from affordable and effective assistive robotic rehabilitation, not only at hospitals but also at home.

RE-ACT is designed to be a paradigm shift in robotic rehabilitation. Traditional methods of inducing human motion require large, heavy-duty robots, which usually wrap around the limbs and constrain motion. Our proposed approach creates a more natural, lightweight motion without requiring the user to wear an exoskeleton.

The system implements multiple layers of safety to protect the patient, and it is designed for ease of use, with full autonomy that guides the user through the stages of therapy.

We have ensured that the robot is fully autonomous, so that minimal input from the user is needed: the robot guides the user in achieving tasks rather than the user commanding the robot. In addition, the robot uses machine-learning algorithms that draw on past data to adapt to the capabilities of the user, increasing or decreasing the difficulty of the tasks.
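The team doesn't spell out the learning rule, but the simplest form of this kind of difficulty adaptation is a success-rate heuristic like the sketch below; the levels and thresholds are illustrative, not RE-ACT's published values:

```python
def next_difficulty(current, recent_outcomes, lo=1, hi=10,
                    raise_at=0.8, lower_at=0.4):
    """Step task difficulty up when the patient's recent success rate
    says the exercise is too easy, down when it is too hard."""
    if not recent_outcomes:
        return current
    rate = sum(recent_outcomes) / len(recent_outcomes)
    if rate >= raise_at:
        return min(hi, current + 1)
    if rate <= lower_at:
        return max(lo, current - 1)
    return current

print(next_difficulty(4, [1, 1, 1, 1, 0]))  # 80% success -> harder: 5
```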

Adaptation key to survival for robots that learn to live with damaged parts

The self-healing hexapod robot

Robots can be surprisingly fragile. A breakdown in a key component can leave even the most advanced and expensive machines disabled or functioning below peak performance.

While the smartest self-learning machines can adapt to breakdowns and resume normal function, this has traditionally been a slow process, as the robots’ programmes work through thousands upon thousands of options. Now, though, researchers have developed algorithms that speed up the learning process, cutting adaptation time from hours to minutes.

In work detailed in the journal Nature, researchers have shown how giving some additional guidance to a trial-and-error algorithm can slash the time it takes a robot to figure out how to get back to work. In effect, robots can be given ‘previous experience’ that helps them eliminate the myriad bad options from the choices they consider as they adapt to damage.
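The Nature paper describes an approach known as "Intelligent Trial and Error": before deployment, the robot computes a large map of diverse behaviours with the performance predicted for its undamaged body, and after damage it searches that map rather than starting from scratch. A stripped-down, greedy sketch of the search (the published method also refines predictions across the whole map with a Gaussian process after every trial):

```python
def adapt(prior, evaluate, alpha=0.9):
    """prior: behaviour -> performance predicted for the intact robot.
    evaluate: runs one behaviour on the damaged robot and returns the
    performance actually measured. Returns a good-enough behaviour."""
    tried, candidates = {}, dict(prior)
    while candidates:
        best = max(candidates, key=candidates.get)  # most promising untried gait
        tried[best] = evaluate(best)                # one real trial on the robot
        if tried[best] >= alpha * candidates[best]: # close to what was promised?
            return best
        del candidates[best]                        # damage broke it; rule it out
    return max(tried, key=tried.get)                # fall back to the best seen
```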

Trials of the algorithm with a six-legged hexapod robot showed how a damaged machine could rapidly figure out how it was affected and find an alternative way to work. If the work continues to be successful, it could represent a major step forward in creating adaptable robots at a much lower cost than today’s machines.

Marsi Bionics is a tech-based startup that develops robotic aids for locomotion and gait rehabilitation.

Spun off from the Centre for Automation and Robotics (CAR) – a joint centre of the Spanish National Research Council (CSIC) and the Technical University of Madrid (UPM) – the company was founded in 2013 and has inherited over 20 years of robot-locomotion know-how.

Sixty million people in the world have lost the ability to walk, of whom 17 million are children affected by various neurological diseases. While confined to wheelchairs, they suffer from a number of physiological and psychological side effects. Their quality of life could be greatly improved if walking could be repaired, restored or rehabilitated.

Marsi Bionics’ solution is gait exoskeletons: robotic devices, powered by motors, that are attached to a person to assist walking.

There are five companies on the market that develop wearable gait exoskeletons, all of whose products are created to assist the in-hospital gait training of adult paraplegics. They reproduce a pre-programmed gait pattern, require additional crutches or walkers for postural stability, and cover only 3 per cent of the potential market of wheelchair users. Buying one requires an investment of between €70,000 and €130,000.

Marsi Bionics’ ATLAS2020 paediatric exoskeleton is the first and only wearable paediatric gait exoskeleton on the market.

The exoskeletons use internationally patented intelligent compliant actuation technology, which mimics the biological muscle-tendon unit. This allows gait therapy to be personalised, adapting over time to each patient’s symptoms and illness progression. This unique feature is what differentiates Marsi Bionics from similar companies.
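Marsi Bionics' patented actuators aren't detailed here, but a common way to obtain muscle-tendon-like compliance is series elastic actuation, where the motor drives the joint through a spring so that the spring's deflection measures the torque actually delivered to the limb. A hypothetical sketch:

```python
def sea_torque(theta_motor_rad, theta_joint_rad, k_nm_per_rad=300.0):
    """Series elastic actuation: torque on the limb is read from
    spring deflection, tau = k * (theta_motor - theta_joint)."""
    return k_nm_per_rad * (theta_motor_rad - theta_joint_rad)

def assist_update(tau_measured, tau_target, kp=0.02):
    """Toy proportional loop nudging the motor so the joint tracks a
    target assistance torque; therapy software can lower tau_target
    as the patient improves (values illustrative only)."""
    return kp * (tau_target - tau_measured)
```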

Marsi Bionics™ wearable exoskeletons are targeted at patients affected by neuromuscular diseases (NMDs), spinal cord injury and cerebral palsy (CP), and also support active ageing in the elderly. These groups constitute 95 per cent of the potential market. There are three models currently on the market:

  • ATLAS Pediatric Series: designed for children affected by NMDs, CP and other conditions. These are the first and only gait exoskeletons for children on the market
  • MBGold: designed for the elderly
  • MBActive Knee: active knee orthosis for rehabilitation after stroke or polio

Social and economic impacts of the technology include:

  • Life expectancy of children affected by NMDs increased by 50 per cent
  • Quality of life improved for affected families
  • Reduction of personnel costs by an estimated €20,000 per year per family
  • A €32m reduction of healthcare costs for governments per country per year

Marsi Bionics exoskeletons have already been evaluated successfully in clinical trials.

As IBM’s supercomputer, Watson, gets stuck into learning Arabic, the AI technology aims to transform business in the Middle East in everything from healthcare to banking

IBM's Watson supercomputer

We often read that Big Data heralds Big Promise. But it also comes with a Big Problem – how to turn all that information into something coherent and useful. Step forward artificial intelligence (AI) and, specifically, Watson.

Developed by tech giant IBM, Watson is a supercomputer that understands human language, crunches vast amounts of data, learns our preferences (rather than being constantly re-programmed) and offers up tailored analysis. Thanks to a joint venture between IBM and Abu Dhabi’s Mubadala in July, Watson’s super brain is now also available in the Middle East.

“Experts are struggling to keep up with an overwhelming sea of information,” says Sunil Mahajan, from IBM Middle East & Africa’s analytics unit. “Watson can understand that information and bridge gaps in our knowledge, helping us to glean better insights.”

Watson is what IBM calls cloud-delivered cognitive computing. Using the same learning processes as humans, Watson analyses mountains of information – from research studies to tweets – to become like a human expert in a particular area, only at “incredible scale”, explains Mahajan.

The tie-up puts all this computing power at the service of the region’s industries, such as healthcare, retail, education, banking and finance. For example, in financial services Watson can sift through data to inform investment choices, trading patterns or risk management, says Mahajan.

Still, it is in healthcare that IBM’s AI holds the most promise. Fed an individual’s data, Watson could issue personalised health advice. The idea is that Watson helps humans make better decisions – by estimating, for example, which cancer treatment is most likely to work for a particular patient. Watson will also soon be able to ‘see’ images, adding to the treasure trove of information: “IBM plans to acquire [US-based] Merge [Healthcare Incorporated] in an effort to unlock the value of medical images to help physicians make better patient care decisions,” says Mahajan.

In the UAE, the computing technology will be housed at Injazat, Mubadala’s IT subsidiary. The name of the joint venture company is still to be decided. The venture hopes to take advantage of a public cloud-service market in MENA forecast to grow at around 17 per cent this year, according to IBM. There are currently more than 300 partners building Watson apps globally, says the firm.

The supercomputer shot to fame in 2011 when it appeared on the US television quiz show Jeopardy!, where it beat two (human) trivia champions, understanding and answering questions in natural language.

Once Watson, named after former IBM president Thomas J Watson, gets up to speed in Arabic, the prospects for its application in the region are significant.

“The Middle East is at an unprecedented turning point, with technology innovation fuelling economic diversification and investment from overseas,” says Mounir Barakat, executive director of information and communications technology at Mubadala. “Now is the right time to bring Watson to every decision maker keen on making informed decisions anywhere.”

Voters are given access to video clips and detailed explanations of the projects’ ideas, their mechanisms and their future uses.

The Organizing Committee of the UAE Drones for Good Award, launched by His Highness Sheikh Mohammed bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE and Ruler of Dubai, and the UAE AI & Robotics Award for Good, launched by His Highness Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai, announced that the Voting Platform for the awards has received more than 250,000 votes from around the world as of today (January 27).

The Voting Platform, launched on the websites of both awards (www.dronesforgood.ae and www.roboticsforgood.ae), has recorded huge interaction from the audience as they vote to choose the teams best qualified for the semi-final stage of the National and International competitions, which will take place from February 4 to 6, 2016, in Dubai Internet City.

The platform gives the public an opportunity to choose the best among more than 40 teams from the UAE and around the world, across many sectors. Visitors can identify the most innovative projects participating in the awards and access video clips and detailed explanations of the main ideas, the mechanisms of the applications, their future uses, and how they can be employed to serve all segments of society.

Saif Al Aleeli, Chief Executive Officer of the Dubai Museum of the Future Foundation and Coordinator General of the UAE Drones for Good Award and UAE AI & Robotics Award for Good, stressed the importance of public participation in the voting as a way for people to show their interest in projects and innovations designed to advance the UAE’s development and to enhance the services provided to them in several sectors, increasing their happiness and welfare in various fields.

The Organizing Committee urged the public to visit the online platform and take part in the voting to identify the projects that best match their future aspirations and needs, and that contribute to the development of services in vital sectors that directly affect their lives, such as education, health and social services.

Voting closes on Wednesday, February 3. The teams with the most votes will be honoured irrespective of whether they are selected by the awards’ Judging Committees.

The Drones for Good Award aims to harness unmanned aircraft technology to improve people’s lives, whether in the UAE or anywhere in the world. It also aims to shape a legislative framework for delivering services through advanced technologies, such as unmanned aircraft, in the service of humanity.

The International and National competitions of the UAE Drones for Good Award are divided into several categories: Environment, Education, Logistics, Transport, Construction and Infrastructure, Health, Civil Defence, Tourism, Social Services, Economic Development, and Humanitarian Aid.

The ‘UAE AI & Robotics Award for Good’ was launched by His Highness Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai, in February 2015, as one of the initiatives of the International Council on Artificial Intelligence and Robotics, which was formed in collaboration with the World Economic Forum during the Global Agenda Council meeting hosted by the UAE Government last year.

It offers a first-of-its-kind global platform for innovation, focusing on the practical side of these technologies in areas of great relevance to society, such as health, education and social services.

Dytective

‘Dytective’ is a real solution that can change the opportunities of children with dyslexia from the start, addressing dyslexia before it becomes a problem.

Our dream is to make dyslexia detection easily available to everyone: detection and intervention for anyone, no matter their country or income.

More than 10% of the population has dyslexia, a learning disorder that affects reading and writing but not general intelligence. Because the diagnosis of dyslexia is expensive and time-consuming, most children are diagnosed only after they have been failing in school for some time. As a result, intelligent and hard-working children fail without knowing why. They are diagnosed too late for effective intervention; up to 40% of school dropouts are attributable to dyslexia.

We are changing this with ‘Dytective’, a game designed to screen for dyslexia at scale. Children play the game for only 15 minutes, and a machine-learning model recognizes patterns associated with dyslexia in how they play (their mouse movements, timing information, and so on). To make this feasible, the activities in Dytective are constructed from:

  • Empirical, linguistic analysis of the errors that people with dyslexia make
  • Principles of language acquisition
  • Specific linguistic skills related to dyslexia

Experiments with 243 children and adults (95 with diagnosed dyslexia) show that the approach works. Our model currently achieves 86% accuracy in deciding whether or not a child has dyslexia. Importantly, its false-negative rate is only 12%: in only 12% of cases does it wrongly predict that a child who has dyslexia does not. For high-school participants, the prediction accuracy is more than 97%. With more training data (we are working with partner schools to reach 10,000 participants by April 2016), we believe Dytective will be able to determine whether children have dyslexia with very high accuracy.
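As an illustration of how a screening model like this is trained and scored (with random placeholder data, so the printed numbers mean nothing; the real features are gameplay measurements such as response times and mouse-movement statistics), the accuracy and false-negative rate quoted above would be computed like so with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(243, 12))                 # stand-in gameplay features
y = (rng.random(243) < 95 / 243).astype(int)   # 1 = diagnosed dyslexia

clf = RandomForestClassifier(n_estimators=200, random_state=0)
pred = cross_val_predict(clf, X, y, cv=10)     # out-of-fold predictions

accuracy = (pred == y).mean()
fnr = ((pred == 0) & (y == 1)).sum() / (y == 1).sum()
print(f"accuracy={accuracy:.2f}  false-negative rate={fnr:.2f}")
```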

Currently, Dytective works for Spanish and will soon be available online. We are working with the government of Spain to deploy it in schools on a large scale. We are also developing Dytective for other languages, including English, German and Arabic. Our overall aim is to reach the whole world’s population.