Home Robot Control for People With Disabilities

18 Apr 2019
Robots offer a way for people to live safely and comfortably in their homes as they grow older. In the near future (we’re all hoping), robots will be able to help us by cooking, cleaning, doing chores, and generally taking care of us, but they’re not yet at the point where they can do those sorts of things autonomously. Putting a human in the loop can help robots become useful sooner, which is especially significant for the people who stand to benefit most from this technology: those with disabilities that make them more reliant on care.
 
The interface is structured around a first-person perspective, with a video feed streaming from the PR2’s head camera. Augmented reality markers overlaid on the video provide controls in 3D space, give visual estimates of how the robot will move when commands are executed, and relay feedback from nonvisual sensors such as tactile sensors and obstacle detection. One of the greatest difficulties is adequately representing the robot’s 3D workspace on a 2D screen; to help, a “3D peek” feature overlays a Kinect-based, low-resolution 3D model of the environment around the robot’s gripper and then simulates a camera rotation. To keep the interface accessible to users with only a mouse and single-click control, there are many different operation modes that can be selected (a rough sketch of how such single-click commands might be mapped to robot actions follows the mode descriptions below), including:
 
Looking mode: Displays the mouse cursor as a pair of eyeballs, and the robot looks toward any point where the user clicks on the video.
 
Driving mode: Allows users to drive the robot in any direction without rotating, or to rotate the robot in place in either direction. The robot drives toward the location on the ground indicated by the cursor over the video when the user holds down the mouse button, and three overlaid traces show the selected movement direction, updating in real time. “Turn Left” and “Turn Right” buttons over the bottom corners of the camera view turn the robot in place.
 
Spine mode: Displays a vertical slider over the right edge of the image. The slider handle indicates the relative height of the robot’s spine, and moving the handle raises or lowers the spine accordingly. These direct manipulation features use the context provided by the video feed to allow users to specify their commands with respect to the world, rather than the robot, simplifying operation.
 
Left-hand and right-hand modes: Allow control of the position and orientation of the grippers in separate submodes, as well as opening and closing the gripper. In either mode, the head automatically tracks the robot’s fingertips, keeping the gripper centered in the video feed and eliminating the need to switch modes to keep the gripper in the camera view.
The grippers also have submodes for position control, orientation control, and grasping; a rough sketch of this fingertip-tracking behavior is also included below. This kind of interface is not going to be the fastest way to control a robot, but for some, it’s the only way. And as Henry Evans, the expert user discussed below, says, he’s patient.
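
To make the single-click design a little more concrete, here is a minimal illustrative sketch (in Python) of how a click on the video feed might be turned into a robot command in the looking and driving modes. This is not the actual Georgia Tech interface code: the pinhole camera model, frame conventions, mode names, and command dictionaries below are all assumptions made for illustration.

```python
# Illustrative sketch only: how a single mouse click on the robot's video feed
# could be interpreted, depending on the selected operation mode. The camera
# model, frames, and command format are assumptions, not the real interface.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

import numpy as np


class Mode(Enum):
    LOOKING = auto()
    DRIVING = auto()
    SPINE = auto()
    LEFT_HAND = auto()
    RIGHT_HAND = auto()


@dataclass
class Click:
    u: float  # pixel column in the video feed
    v: float  # pixel row in the video feed


def pixel_to_ray(click: Click, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a pixel into a unit ray in the camera frame (pinhole model)."""
    ray = np.array([(click.u - cx) / fx, (click.v - cy) / fy, 1.0])
    return ray / np.linalg.norm(ray)


def ray_to_ground_point(ray_cam: np.ndarray, cam_pose: np.ndarray) -> Optional[np.ndarray]:
    """Intersect a camera-frame ray with the z = 0 ground plane.

    cam_pose is a 4x4 homogeneous transform from the camera frame to a
    ground-level base frame (z up). Returns the ground point, or None if the
    click is at or above the horizon and the ray never reaches the floor.
    """
    origin = cam_pose[:3, 3]
    ray_world = cam_pose[:3, :3] @ ray_cam
    if ray_world[2] >= -1e-6:        # ray is flat or pointing upward
        return None
    t = -origin[2] / ray_world[2]    # distance along the ray to z = 0
    return origin + t * ray_world


def handle_click(mode: Mode, click: Click,
                 intrinsics: Tuple[float, float, float, float],
                 cam_pose: np.ndarray) -> Optional[dict]:
    """Map a click to a command dictionary, depending on the selected mode."""
    ray = pixel_to_ray(click, *intrinsics)
    if mode is Mode.LOOKING:
        # Looking mode: point the head toward the clicked direction.
        return {"type": "look_at", "ray_cam": ray.tolist()}
    if mode is Mode.DRIVING:
        # Driving mode: drive toward the clicked point on the ground.
        target = ray_to_ground_point(ray, cam_pose)
        if target is None:
            return None              # clicked above the floor; ignore
        return {"type": "drive_toward", "xy": target[:2].tolist()}
    return None                      # other modes use sliders/buttons instead
```

In the real interface these commands would be streamed to the robot over the Internet and previewed with the overlaid movement traces; here they are returned as plain dictionaries only to keep the sketch self-contained.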
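
The automatic head tracking in the hand-control modes can be sketched the same way: given the gripper’s position, compute pan and tilt angles that keep the fingertips centered in the camera view. Again, the frame conventions and function names below are hypothetical rather than the PR2’s actual API.

```python
# Minimal sketch, under assumed frame conventions (x forward, y left, z up,
# positive tilt looks downward): aim the head camera at the gripper so the
# user never has to switch modes just to re-center the view.
import math
from typing import Sequence, Tuple


def head_angles_for_target(target_xyz: Sequence[float],
                           head_xyz: Sequence[float]) -> Tuple[float, float]:
    """Return (pan, tilt) in radians that point the head at target_xyz."""
    dx = target_xyz[0] - head_xyz[0]
    dy = target_xyz[1] - head_xyz[1]
    dz = target_xyz[2] - head_xyz[2]
    pan = math.atan2(dy, dx)                    # rotate left/right toward target
    tilt = math.atan2(-dz, math.hypot(dx, dy))  # look down when the target is below
    return pan, tilt


# Each control cycle, the interface would look up the gripper's current pose
# (e.g. from the robot's forward kinematics) and send the resulting pan/tilt
# as a head command, keeping the fingertips centered in the video feed.
```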
 
In a study of 15 participants with disabilities who took control of Georgia Tech’s PR2 over the Internet with very little instruction (a bit over an hour), this software interface proved both easy to use and effective. It’s certainly not fast: simple tasks like picking up objects took most participants about 5 minutes, where they would take an able-bodied person 5 seconds. But as Charlie Kemp and Phillip Grice, a recent Georgia Tech Ph.D. graduate, point out in a recent PLOS ONE paper, “for individuals with profound motor deficits, slow task performance would still increase independence by enabling people to perform tasks for themselves that would not be possible without assistance.”
 
A separate study with Henry, considered to be an “expert user,” demonstrated how much opportunity there is with a system like this.
 
Granted, a PR2 is probably overkill for many of these tasks, and it’s also not likely to be available to most people who could use an assistive robot. But the interface that Georgia Tech has developed here could be applied to many different kinds of robots, including lower-cost arms (like UC Berkeley’s Blue) that wouldn’t necessarily need a mobile base to be effective. And if an arm could keep someone independent and comfortable for hours at a time without a human caretaker, it’s possible that the technology could even pay for itself.



This article was originally posted on Tronserve.com.
