Author
Listed:
- Jacqueline Humphries
- Pepijn Van de Ven
- Nehal Amer
- Nitin Nandeshwar
- Alan Ryan
Abstract
Purpose - Maintaining human safety is a major concern in factories where humans co-exist with robots and other physical tools. Typically, the area around the robots is monitored using lasers. However, lasers cannot distinguish between human and non-human objects in the robot’s path, and stopping or slowing the robot when non-human objects approach is unproductive. This research contribution addresses that inefficiency by showing how computer-vision techniques can be used instead of lasers, improving the robot’s uptime.
Design/methodology/approach - A computer-vision safety system is presented. Image segmentation, 3D point clouds, face recognition, hand gesture recognition, speed and trajectory tracking and a digital twin are used. Using speed and separation monitoring, the robot’s speed is controlled based on the nearest location of humans, accurate to their body shape. The computer-vision safety system is compared to a traditional laser measure and is evaluated both in a controlled test and in the field.
Findings - Computer vision and lasers are shown to be equivalent by a measure of relationship and a measure of agreement. R² is given as 0.999983. The two methods systematically produce similar results, as the bias is close to zero, at 0.060 mm. Using Bland–Altman analysis, 95% of the differences lie within the limits of maximum acceptable differences.
Originality/value - In this paper an original model for future computer-vision safety systems is described which is equivalent to existing laser systems, identifies and adapts to particular humans, and reduces the need to slow and stop systems, thereby improving efficiency. The implication is that computer vision can substitute for lasers and permit adaptive robotic control in human–robot collaboration systems.
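The equivalence claim in the Findings rests on Bland–Altman agreement analysis: compute the per-sample difference between the two measurement methods, its mean (the bias) and standard deviation, and check that 95% of differences fall within the limits of agreement. A minimal sketch of that calculation is below; the paired laser and vision distance readings are synthetic placeholders, not the authors' data, and the names (`laser`, `vision`) are hypothetical.

```python
import numpy as np

# Synthetic paired distance readings (mm): a hypothetical laser scanner and a
# computer-vision estimate of the same human-robot separation. Illustration only.
rng = np.random.default_rng(0)
laser = rng.uniform(500.0, 3000.0, size=200)
vision = laser + rng.normal(loc=0.06, scale=0.5, size=200)  # small bias + noise

diff = vision - laser
bias = diff.mean()                 # systematic offset between the two methods
sd = diff.std(ddof=1)              # sample standard deviation of the differences
loa_low = bias - 1.96 * sd         # lower 95% limit of agreement
loa_high = bias + 1.96 * sd        # upper 95% limit of agreement

# Fraction of differences lying within the limits of agreement
within = np.mean((diff >= loa_low) & (diff <= loa_high))

print(f"bias = {bias:.3f} mm")
print(f"limits of agreement = [{loa_low:.3f}, {loa_high:.3f}] mm")
print(f"fraction of differences within limits: {within:.2%}")
```

If the bias is near zero and nearly all differences sit inside the limits (as the paper reports: bias 0.060 mm, 95% within the maximum acceptable differences), the two methods can be treated as interchangeable for this measurement task.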
Suggested Citation
Jacqueline Humphries & Pepijn Van de Ven & Nehal Amer & Nitin Nandeshwar & Alan Ryan, 2024.
"Managing safety of the human on the factory floor: a computer vision fusion approach,"
Technological Sustainability, Emerald Group Publishing Limited, vol. 3(3), pages 309-331, April.
Handle:
RePEc:eme:techsp:techs-12-2023-0054
DOI: 10.1108/TECHS-12-2023-0054