Gesture Control in Laboratory Robots

The very beginning

In the early 1980s, Dr. Masahide Sasaki introduced the first fully automated laboratory, which used several robotic arms working in tandem with conveyor belts and automated sample analyzers. Since then, rapid advances in robotics and the growing demand for automation have made robots increasingly popular aids for a wide range of tasks. One example is the mobile robot arm, currently used in medical centers, home services, and space exploration.

Science Robot. Image Credit: metamorworks/Shutterstock.com

Current challenges

However, in some complicated environments, relying on the mobile robot arm alone is not ideal, since the arm must move and cooperate with other parts of the system. In such cases, a human-computer fusion method is advantageous. At the moment, the most efficient way to achieve this is somatosensory interaction, which allows users to interact with the device or scene of interest directly with their limbs, without additional control apparatus. Somatosensory interaction can be divided into three categories: inertial sensing, optical sensing, and joint inertial-optical sensing.

Inertial sensing measures the user's motion signals through a gravity sensor and then converts them into control signals. It is accurate and reliable, but its sensors are not user-friendly. Optical sensing extracts state information from images and converts it into control signals; it is intuitive and easy to use, but the images can easily be "contaminated" by background illumination or motion. The joint sensing method combines the advantages of the two, although it places greater demands on the algorithm that produces the control signals.
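As a rough illustration of how a joint sensing method might blend the two modalities, the sketch below fuses a hypothetical inertial orientation estimate with an optical one using a simple complementary filter and turns the result into a bounded control command. The sensor values, blend weight, and function names are illustrative assumptions, not details taken from any of the cited systems.

```python
# Minimal sketch: fusing an inertial and an optical estimate of a hand angle
# with a complementary filter. All numbers and names are illustrative only.

def fuse_estimates(prev_angle, gyro_rate, optical_angle, dt, alpha=0.98):
    """Blend a gyro-integrated angle (smooth, but drifts over time) with an
    optical angle (absolute, but noisy and occlusion-prone)."""
    inertial_angle = prev_angle + gyro_rate * dt   # integrate angular rate
    return alpha * inertial_angle + (1 - alpha) * optical_angle

def angle_to_command(angle, gain=0.5, limit=1.0):
    """Convert the fused angle into a bounded velocity command for the arm."""
    return max(-limit, min(limit, gain * angle))

# Example: one control step with made-up sensor readings
fused = fuse_estimates(prev_angle=0.10, gyro_rate=0.35, optical_angle=0.18, dt=0.02)
print(angle_to_command(fused))
```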

The correspondence problem

With the current technology, motions are mostly preprogrammed off-line for a specific robot setup, or they are generated by mapping motion capture data to the robot's configuration. These techniques take into account the particular robot layout they are designed for and, as a result, cannot be transferred to other robots. This issue is widely known in robotics as the correspondence problem. To circumvent it, G. Van de Perre et al. suggested using a novel end-effector mode for actions that require reaching for an object or pointing.

In this mode, the user specifies the position of the end-effector of the robot of interest. This is done through the place-at condition: the user defines the desired end-effector position of the left and/or right arm, and an inverse kinematics problem is solved to calculate the joint angles needed to assume the requested position. The workspace of the robot must also be taken into account.
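As a concrete, simplified illustration of the inverse kinematics step, the sketch below solves the closed-form problem for a planar two-link arm: given a desired end-effector position, it returns the joint angles that reach it, or signals that the target is out of reach. The two-link geometry and the link lengths are assumptions made here for illustration; the method of Van de Perre et al. handles full humanoid arm configurations.

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Closed-form inverse kinematics for a planar two-link arm.
    Returns (shoulder, elbow) joint angles in radians, or None if the
    target lies outside the reachable workspace."""
    r2 = x * x + y * y
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1.0:          # target outside the annular workspace
        return None
    elbow = math.acos(cos_elbow)      # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Example: joint angles needed to place the end-effector at (0.35, 0.20)
print(two_link_ik(0.35, 0.20))
```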

Once the user requests the desired position, the method verifies whether this point is within the robot's reach. This is done by approximating the workspace of each hand with a spherical model bounded by maximum and minimum radii. If the desired point lies inside the robot's range, an efficient trajectory towards that endpoint is calculated by imitating what humans do: a linear path between the initial and final positions. If curvature is required to complete the action, the minimum amount of curvature is used, keeping the path as close as possible to the linear one.
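A minimal sketch of this reachability check and straight-line trajectory might look as follows; the spherical bounds, shoulder position, and number of waypoints are placeholder values rather than figures from the paper.

```python
import math

def in_workspace(target, shoulder=(0.0, 0.0, 0.0), r_min=0.10, r_max=0.55):
    """Approximate the arm's workspace as a spherical shell around the
    shoulder and check whether the target lies inside it."""
    d = math.dist(target, shoulder)
    return r_min <= d <= r_max

def linear_path(start, goal, steps=20):
    """Human-like straight-line trajectory: evenly spaced waypoints
    between the initial and final end-effector positions."""
    return [tuple(s + (g - s) * t / steps for s, g in zip(start, goal))
            for t in range(steps + 1)]

# Example: plan a path only if the requested point is reachable
target = (0.35, 0.20, 0.10)
if in_workspace(target):
    waypoints = linear_path((0.20, -0.10, 0.30), target)
```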

Deep learning computer vision and robot-assisted surgery

In a study by Francisco Luongo et al., computer vision models were trained to predict the presence of a suturing gesture as well as subtler distinctions such as needle positioning, needle driving, and suture cinching. From annotated video data, a dataset was compiled of short clips corresponding to moments of "needle driving" and short clips of non-needle-driving surgical activity; this was the "identification dataset". In the computer vision domain, the computational task of identifying an action from video is called action recognition.
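To make the idea of an identification dataset concrete, the sketch below shows one hypothetical way of turning annotated needle-driving spans into labelled fixed-length clips; the annotation format, clip length, and frame rate are assumptions for illustration and are not taken from the study.

```python
# Hypothetical assembly of an "identification dataset": short clips labelled
# as needle driving (1) or other surgical activity (0). Annotation format,
# clip length, and frame rate are illustrative assumptions only.

FPS = 30
CLIP_SECONDS = 2

def clips_from_annotations(annotations, total_frames):
    """annotations: list of (start_frame, end_frame) spans of needle driving.
    Returns (start_frame, label) pairs for non-overlapping fixed-length clips."""
    clip_len = FPS * CLIP_SECONDS
    clips = []
    for start in range(0, total_frames - clip_len, clip_len):
        end = start + clip_len
        overlaps = any(s < end and e > start for s, e in annotations)
        clips.append((start, 1 if overlaps else 0))
    return clips

# Example: 20 s of video with one annotated needle-driving span
print(clips_from_annotations([(150, 330)], total_frames=600))
```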

Neural networks have recently shown their ability to extract the relevant features from such spatiotemporal data. A typical example is the so-called "two-stream network", which takes two input streams: RGB pixels and an optical flow representation. Each stream is passed through a standard feature extractor (such as a deep convolutional network), and the resulting representations are fed into a temporally recurrent classification layer.
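The two-stream architecture can be sketched in a few lines of PyTorch. The small convolutional backbones, the GRU classification head, and the binary output used here are simplifying assumptions for illustration; the cited study used its own architecture and training setup.

```python
import torch
import torch.nn as nn

class TwoStreamClassifier(nn.Module):
    """Minimal two-stream sketch: per-frame features are extracted from the
    RGB stream (3 channels) and the optical-flow stream (2 channels), then
    the concatenated feature sequence is classified by a recurrent layer."""

    def __init__(self, feat_dim=64, hidden=128, n_classes=2):
        super().__init__()
        self.rgb_net = self._backbone(in_ch=3, feat_dim=feat_dim)
        self.flow_net = self._backbone(in_ch=2, feat_dim=feat_dim)
        self.rnn = nn.GRU(2 * feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    @staticmethod
    def _backbone(in_ch, feat_dim):
        # Stand-in for a deep feature extractor (e.g. a pretrained CNN)
        return nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, rgb, flow):
        # rgb: (B, T, 3, H, W), flow: (B, T, 2, H, W)
        b, t = rgb.shape[:2]
        f_rgb = self.rgb_net(rgb.flatten(0, 1)).view(b, t, -1)
        f_flow = self.flow_net(flow.flatten(0, 1)).view(b, t, -1)
        seq = torch.cat([f_rgb, f_flow], dim=-1)
        out, _ = self.rnn(seq)
        return self.head(out[:, -1])      # classify from the last time step

# Example forward pass on random data (batch of 2 clips, 8 frames each)
model = TwoStreamClassifier()
logits = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 8, 2, 64, 64))
```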

Robotic Surgery. Image Credit: MAD.vertise/Shutterstock.com

Human-robot interaction as the focal point

Although significant steps have been made toward complete automation, human-robot interaction will remain the critical technique moving forward, especially for complicated tasks. Nevertheless, given the rapid progress in neural networks and deep learning, future robot configurations may be capable of performing highly complex procedures, such as surgery, fully autonomously.

Sources:

  • Boyd, J. Robotic laboratory automation. Science 295 (5554) (2002) 517-518.
  • Van de Perre, G. et al. Reaching and pointing gestures calculated by a generic gesture system for social robots. Robotics and Autonomous Systems 83 (2016) 32-43.
  • Li, X. Human-robot interaction based on gesture and movement recognition. Signal Processing: Image Communication 81 (2020) 115686.
  • Luongo, F. et al. Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery. Surgery 169 (2021) 1240-1244.


Last Updated: May 16, 2022


Written by

Dr. Georgios Christofidis

Georgios is an experienced researcher who started as a freelance science editor during the last stages of his Ph.D. studies. He has a B.Sc. in Chemistry from the Aristotle University of Thessaloniki and an M.Sc. in Forensic Science from the University of Amsterdam. Currently, he is nearing the end of his Ph.D. project at Liverpool John Moores University, which concerns latent fingermark development on fired cartridge cases.

