Special Issue (24-17): Image Processing for Next-Gen Robotics: Bridging the Physical and Digital
Posted on 2024-08-07
The main image processing methods used in robotics are image filtering, segmentation, feature extraction, and object recognition. Filtering improves image quality; segmentation separates objects from their surroundings; feature extraction identifies salient components; and object recognition detects and classifies objects. Digital cameras can be added to a robot to act as its "eyes" and capture images, and if the robot can identify an object in those images it can utilise it: in a manufacturing setting, for example, it might assemble parts with a screwdriver. This makes it possible to build next-generation robots that are more accurate and productive than humans at certain tasks, boosting productivity and efficiency, especially in industries like manufacturing where robots can operate continuously without breaks.
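To make the four methods above concrete, the sketch below chains them into a single pipeline for the screwdriver example. It is a minimal sketch assuming OpenCV (cv2) is available; the function name `robot_vision_pipeline`, the tool template image, and the matching threshold are illustrative assumptions, not a prescribed implementation.

```python
# Hypothetical sketch of the four stages (filter -> segment -> extract
# features -> recognise) for spotting a known tool in a camera frame.
import cv2

def robot_vision_pipeline(frame_bgr, tool_template_gray, match_threshold=25):
    """Return bounding boxes of regions that match the tool template."""
    # 1. Filtering: reduce sensor noise to improve image quality.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)

    # 2. Segmentation: separate foreground objects from the background.
    _, mask = cv2.threshold(smoothed, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # 3. Feature extraction: describe each segmented region with ORB keypoints.
    orb = cv2.ORB_create()
    _, tmpl_desc = orb.detectAndCompute(tool_template_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    detections = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < 500:          # ignore tiny regions (likely noise)
            continue
        roi = smoothed[y:y + h, x:x + w]
        _, desc = orb.detectAndCompute(roi, None)
        if desc is None or tmpl_desc is None:
            continue

        # 4. Object recognition: count good matches against the tool template.
        matches = matcher.match(tmpl_desc, desc)
        if len(matches) >= match_threshold:
            detections.append((x, y, w, h))   # candidate tool location
    return detections
```

In practice each stage would be tuned to the robot's camera and workspace; the point of the sketch is simply how the four methods feed one another.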
Robotics is about to take a giant step forward thanks to the arrival of next-generation artificial intelligence (AI). It is hoped that this potent combination will give robots enhanced cognitive capabilities, blurring the line between automation and intelligent assistance. Image processing essentially comprises three steps: importing the image using image acquisition tools; analysing and modifying the image; and producing an output, which may be a changed image or a report derived from the image analysis. Digital Image Processing (DIP) refers to the manipulation of digital images on a computer system; it is also employed to enhance images and extract significant information from them, with MATLAB and Adobe Photoshop as familiar examples. In general, an image can undergo three forms of processing: low-level, intermediate-level, and high-level. DIP offers two broad benefits. Enhanced visual quality: digital image processing methods can enhance an image's clarity, sharpness, and informativeness. Automated image-based tasks: a variety of image-based tasks, including object recognition, pattern recognition, and measurement, can be automated with digital image processing.
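As a concrete illustration of the three steps, the following minimal sketch acquires an image, analyses and modifies it, and produces both a changed image and a small report. It assumes OpenCV (cv2); the file names and report fields are hypothetical, introduced only for illustration.

```python
# Hypothetical sketch of the three steps: acquire, analyse/modify, output.
import cv2

# Step 1: image acquisition - import the image from a file or capture device.
image = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("workpiece.png not found")

# Step 2: analyse and modify - enhance contrast and detect edges.
enhanced = cv2.equalizeHist(image)
edges = cv2.Canny(enhanced, 100, 200)

# Step 3: output - a changed image plus a report derived from the analysis.
cv2.imwrite("workpiece_edges.png", edges)
report = {
    "mean_intensity": float(image.mean()),
    "edge_pixel_ratio": float((edges > 0).mean()),
}
print(report)
```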
Future automation and robotics could serve as teachers, explorers, assistants, colleagues, and surgeons, and robotics' capabilities will only grow as engineers and scientists continue to develop and expand the technology. Robots are already a commonplace sight in many aspects of daily life: in material handling they pick, sort, package, and palletize finished products ready for distribution, and robotics is also widely used in education, agriculture, and clinical laboratories. Ameca is one of the world's most sophisticated robots; the company behind it released a video in August showing Ameca sketching a cat, which soon went viral, and Ameca became the first robot to sketch on its own after being given the ability to do so utilising Stable Diffusion.
Image processing can be carried out in two ways. Analog image processing involves enhancing hard copies such as photographs and printouts, while digital image processing uses computer algorithms to improve or reconstruct images. DIP is chiefly concerned with building a computer system that takes a digital image as input, processes it with effective algorithms, and outputs an image as the result; Adobe Photoshop is the most familiar example.
Articles are invited that explore Image Processing for Next-Gen Robotics: Bridging the Physical and Digital. Case studies and practitioner perspectives are also welcome.
Potential topics include but are not limited to the following:
- A digital twin framework for smart greenhouse management employing machine learning and next-generation cell phones.
- Increasing AEC capabilities through human-computer collaboration and robotic fabrication.
- An industrialised robotic swarm testing facility in open spaces.
- An investigation of the obstacles, patterns, opportunities, and enabling technology.
- On-demand adjustment of vulnerable mixes using explainable and abnormal machine learning.
- An innovative paradigm for AI-driven urban development and architecture.
- Intelligent machines to further science and investigation of the moon and planets.
- Tentative assessment of an initial sophisticated virtual exhibition driven by robots, XR, and AI.
- Employing smartphone technology to improve sensory and cognitive function.
- Digital medical treatment, artificial intelligence, and sophisticated wireless combined.
- Concepts, applications, and anticipated developments of extended reality in Internet of Things settings.
- Edge-improved binocular meta-lens depth perception.
Timeline:
Submission deadline: December 25, 2024
Author notification: February 25, 2025
Revised papers due: April 30, 2025
Final notification: June 30, 2025
Publication of the special issue will be as per the policy of the journal.
Credentials of Guest Editor team:
Dr. Emmanuel Gbenga Dada
Associate Professor,
Department of Computer Science,
University of Maiduguri,
Maiduguri, Nigeria.
Email id: [email protected], [email protected]
Profile Links:
https://scholar.google.com/citations?user=b5k7P-MAAAAJ&hl=en
https://www.researchgate.net/profile/Emmanuel-Dada
Dr. Stephen Joseph Bassi
Assistant Professor
Department of Computer Engineering,
University of Maiduguri,
Maiduguri, Nigeria.
Email id: [email protected]
Profile Links: https://scholar.google.com.my/citations?user=NFZXVnUAAAAJ&hl=en
https://www.researchgate.net/profile/Stephen-Joseph-4
Dr. Ayodele Lasisi
Assistant Professor
Department of Computer Science,
King Khalid University,
Abha, Saudi Arabia.
Email id: [email protected]
Profile Links: https://scholar.google.com/citations?user=OhAYVBQAAAAJ&hl=en
https://www.researchgate.net/profile/Ayodele-Lasisi