Impact Of Artificial Intelligence On The Robotics Industry

Rodney Brooks of Rethink Robotics argues that artificial intelligence is still in its infancy and that, for the foreseeable future, machine intelligence cannot compete with human intelligence.

Researchers and manufacturers use artificial intelligence (AI) to teach robots how to handle complex tasks, but the robots’ capabilities still lag behind people’s ideas of what robots can do. Artificial intelligence is also defined more broadly than it used to be, which can create misunderstandings.

Researchers and entrepreneurs with decades of experience in artificial intelligence are trying to help people better understand its nature, clear up some of the confusion and misconceptions surrounding it, and show how it is used in robotics for industrial applications.

“I think the biggest misconception is about how far it has come,” said Rodney Brooks, president and chief technical officer of Rethink Robotics. “We have been working on artificial intelligence under this name since 1956, when its founder, John McCarthy, began to use the term ‘artificial intelligence,’ so it has been around for 62 years. However, it is a much harder problem than physics, and physics has taken a very long time. I think artificial intelligence is still in its infancy.”

Brooks believes that the popularity of artificial intelligence stems from recent stunning media presentations of anthropomorphic and animal-inspired robots, or from spectator-friendly contests in which artificial intelligence systems have been pitted against people at chess, Jeopardy!, table tennis, and Go. Artificial intelligence is here, but so far it is only taking baby steps.

Part of the misunderstanding is that people confuse machine performance with competence. When we see a person perform a certain action, we can infer the general abilities, skills, and talents that the person must have in order to perform it. This is not the case with artificial intelligence.

“An artificial intelligence system can play chess fantastically, but it doesn’t even know it’s playing a game,” Brooks said. “We confuse the performance of machines with their capabilities. When we see that a program has learned something that a person can learn, we mistakenly think it has as rich an understanding as we do.”

What artificial intelligence is and what it is not

Artificial intelligence has become a fashionable marketing concept. As happened before with the word “robot,” everything now seems to have “artificial intelligence,” and it is sometimes difficult to determine what artificial intelligence is and what it is not. Even experts are reluctant to say for sure. As Rodney Brooks noted, what was considered artificial intelligence in the 1960s is now taught in the first lesson of a computer programming course. But it is no longer called artificial intelligence.

Pieter Abbeel is introducing the results of groundbreaking machine learning research into industrial applications for real-world robots that can learn new skills on their own. Image courtesy of Embodied Intelligence / RIA

“At some point, something is called artificial intelligence,” Brooks points out, “but later it becomes just computer science.” Machine learning and all its variations, including deep learning, reinforcement learning, and imitation learning, are subsets of artificial intelligence.

“Artificial intelligence was a very narrow field for some time. Some people classified it very specifically as a set of search-based techniques,” said Ken Goldberg, a professor of industrial engineering and operations research at the University of California (UC), Berkeley. “The concept of artificial intelligence is now generally considered an umbrella term for robotics and machine learning, so it covers a number of sub-disciplines.”

Advanced forms of computer vision are one form of artificial intelligence. “Simply checking that a screw is in the right place has been around since the 1960s; calling it artificial intelligence would be an exaggeration,” Goldberg warns. “However, we now generally consider computer vision systems that can recognize workers’ faces to be artificial intelligence. That is a much more sophisticated challenge.”

Insufficient context

Context is important for distinguishing between human intelligence and machine intelligence. As humans, we understand the context of the world around us. Artificial intelligence lacks it.

“We’ve been working on context in artificial intelligence for 60 years, and we’re far from reaching the goal,” Brooks continues. “We have been successful in some very narrow areas, and those narrow areas are what look revolutionary today. Speech recognition is undoubtedly radically different from what it was ten years ago. I used to joke that you could count on speech recognition systems to disappoint you. That is no longer the case today.”

He cites Amazon’s virtual assistant Alexa as an example. Others are Google Assistant and Apple’s Siri.

“You say something to Alexa, and she basically understands it, even though there’s music playing in the background or other people are talking in the room. It’s amazing how good she is, and she owes it to deep learning. So there has been significant improvement in some of these narrow areas. We will make the best use of these narrow segments to create better products,” says Brooks.

“When I founded Rethink Robotics, we researched all commercially available speech recognition systems. At the time, we came to the conclusion that there was no point in introducing any speech recognition into robots in manufacturing plants. I think that has changed now. It may make sense. But that was not the case in 2008.”

Speech recognition produces the correct strings of words. Brooks argues that exact strings of words are enough to do a lot of things, but they do not imply the same kind of intelligence that a person has.

“That’s the difference,” he adds. “Creating strings of words is a narrowly focused ability, and there is still a long way to go before it is anything but narrow.”

Many optimistic predictions about artificial intelligence have been built on these narrow abilities, and they tend to take a rather pessimistic view of the human role in the future.

Real-world artificial intelligence research

Goldberg emphasizes plurality over singularity and underlines the importance of diverse combinations of people and machines working together to solve problems and innovate. This collaboration is especially important when artificial intelligence applications leave the lab and enter the real world.

Pieter Abbeel, a professor of electrical engineering and computer science at UC Berkeley who, as president and chief scientist of Embodied Intelligence, works to bring artificial intelligence into industry, also emphasizes the importance of human-machine collaboration.

A collaborative robot with integrated artificial intelligence oversees a CNC lathe at a custom injection molding operation. By relieving the operator of repetitive tasks, it helps improve product quality and production efficiency. Image courtesy of Rethink Robotics / RIA

Deep learning for robotic grasping

Goldberg’s AUTOLAB has been dedicated to artificial intelligence for more than ten years, applying it to projects in cloud robotics, deep reinforcement learning, learning from demonstration, and robust robotic grasping and manipulation for warehouse logistics, home robotics, and surgical robotics.

The laboratory’s Dexterity Network (Dex-Net) project has shown that artificial intelligence can help robots learn to grasp objects of different sizes and shapes by feeding millions of 3D object models, images, and associated grasp metrics into a deep neural network. Previously, robots learned to grasp and manipulate objects through repetitive practice on a variety of physical objects, which is a time-consuming process. By using synthetic point clouds instead of physical objects to train the neural network to recognize a robust grasp, the latest versions of Dex-Net are much more efficient and achieve 99% grasp accuracy.
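
To make the idea concrete, here is a minimal PyTorch sketch of training a grasp-quality network on synthetic data in the spirit of Dex-Net: a small convolutional network scores candidate grasps on depth-image crops. The network architecture, tensor shapes, and the random stand-in dataset are illustrative assumptions, not the published Dex-Net code.

# Minimal sketch of a grasp-quality network in the spirit of Dex-Net:
# a small CNN scores candidate grasps on depth images rendered from
# synthetic 3D models. Shapes, names, and the random "dataset" are
# illustrative assumptions, not the published Dex-Net code.
import torch
import torch.nn as nn

class GraspQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encode a 32x32 depth-image crop centered on the grasp point.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        # Combine image features with the grasp depth parameter
        # and predict the probability that the grasp will succeed.
        self.head = nn.Sequential(
            nn.Linear(32 * 5 * 5 + 1, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, depth_crop, grasp_depth):
        features = self.conv(depth_crop)
        x = torch.cat([features, grasp_depth], dim=1)
        return self.head(x)  # logit of grasp success

# Stand-in for synthetic training data: random depth crops, grasp depths,
# and success labels that a simulator or analytic metric would normally supply.
crops = torch.rand(256, 1, 32, 32)
depths = torch.rand(256, 1)
labels = torch.randint(0, 2, (256, 1)).float()

net = GraspQualityNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(net(crops, depths), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

In the real pipeline, the labels come from physics-based grasp metrics computed on the synthetic models rather than random values, which is what lets training scale to millions of examples without a physical robot.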

Goldberg believes that, in the long term, they will be able to develop highly reliable robotic grasping for a wide variety of rigid objects, such as tools, household items, packaged goods, and industrial parts.

Collaborative robots using deep learning

Rethink Robotics’ Intera 5 software aims to make the Baxter and Sawyer collaborative robots smarter. Brooks argues that a great deal of artificial intelligence goes into the robots’ ability to see and learn.

“Traditional industrial robots didn’t have much intelligence,” Brooks said, “but we’re working to make that different in the future. We are introducing deep learning functions to robots. We are trying to cope with variation because, in our opinion, it is typical of the 90% of production where robots work in the same space as people.”

The Sawyer and Baxter robots have a learning-from-demonstration feature that uses artificial intelligence.

“When you teach a robot by demonstration, you show it certain things by moving its arm, and the robot builds a program called a behavior tree,” Brooks explains. “It writes down the program that it will then run. You don’t have to write this program yourself.”

Intera 5 includes a graphical programming language. According to Brooks, you can view, edit, or write the behavior-tree program yourself instead of relying only on the one the robot generates automatically.
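
As an illustration of what such a generated program might look like, the following toy Python sketch implements a tiny behavior tree for a pick-and-place task. The node types and task steps are assumptions made for the example; Intera 5’s actual behavior-tree representation is not shown here.

# Toy behavior tree for a pick-and-place task, to illustrate the kind of
# program a demonstration generates. Node types and task steps are
# illustrative assumptions, not Intera 5's actual representation.
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node: runs one primitive skill (here just printed)."""
    def __init__(self, name):
        self.name = name

    def tick(self):
        print(f"executing: {self.name}")
        return SUCCESS

class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

# A demonstration (guiding the arm by hand) would be recorded as a tree like this:
pick_and_place = Sequence([
    Action("move above part"),
    Action("close gripper"),
    Action("move to fixture"),
    Action("open gripper"),
])

pick_and_place.tick()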

An operator wearing virtual reality goggles holds a motion sensor and remotely controls the robot, showing it how to grasp and manipulate objects so that the robot can then refine these new skills through reinforcement learning. Image courtesy of Embodied Intelligence / RIA

Artificial intelligence changes robot programming

Artificial intelligence changes the way robots are programmed. Abbeel and his team at Embodied Intelligence use the power of artificial intelligence to help industrial robots learn new, complex skills.

Their work builds on research that Pieter Abbeel led at UC Berkeley, where his group made a major breakthrough in using imitation learning and deep reinforcement learning to teach robots to manipulate objects. The company uses a combination of sensing and control to operate the robot remotely. On the sensing side, the operator wears virtual reality (VR) goggles that show the robot’s view through its camera.

On the control side, the operator holds a handheld device that is part of the virtual reality setup. When the operator’s hand moves, the movement is tracked, and the sensed coordinates and orientations are sent to the computer controlling the robot. In this way, the operator has direct control over the movement of the robot’s gripper, much like guiding a puppet.
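
A rough sketch of this control loop is shown below: the tracked pose of the handheld controller is streamed to the robot as an end-effector target at a fixed rate. The read_controller_pose function and RobotArm class are hypothetical placeholders standing in for the VR tracker and robot interface, which the article does not specify.

# Minimal sketch of VR teleoperation: the tracked controller pose is
# relayed to the robot as an end-effector target at a fixed rate.
# read_controller_pose() and RobotArm are hypothetical placeholders,
# not Embodied Intelligence's actual interfaces.
import time
import random

def read_controller_pose():
    """Stand-in for the VR tracker: position (m) and orientation (quaternion)."""
    position = [random.uniform(-0.1, 0.1) for _ in range(3)]
    orientation = [0.0, 0.0, 0.0, 1.0]
    return position, orientation

class RobotArm:
    """Stand-in for a robot controller that accepts Cartesian targets."""
    def move_gripper_to(self, position, orientation):
        print(f"target position={position} orientation={orientation}")

arm = RobotArm()
for _ in range(10):          # stream poses at roughly 20 Hz
    pos, quat = read_controller_pose()
    arm.move_gripper_to(pos, quat)
    time.sleep(0.05)

Each teleoperated session would also be logged (camera images plus the commanded poses) so that the recorded demonstrations can later be used for imitation learning.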

“We essentially let a person step into the robot,” says Abbeel. “You can see with the robot’s eyes and control the robot’s hands.”

According to Abbeel, humans are so dexterous that a robot’s grippers cannot compare with our hands. By operating through the virtual reality system, the operator is forced to respect the robot’s limitations.

“You teach a robot the essence of a skill by demonstrating it,” explains Abbeel. “That doesn’t mean it will immediately move at robotic speed. You demonstrate at a human pace, which is slow for most robots. Teaching the robot by demonstration is the first phase (imitation learning). In the second phase, the robot performs reinforcement learning, where it learns through its own trial and error. The beauty is that the robot has already learned the essence of the task; now it just needs to learn how to speed it up, and through reinforcement learning it can do that relatively quickly.”
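
The two phases Abbeel describes can be sketched roughly as follows, assuming a simple neural-network policy: first behavior cloning on recorded demonstrations, then a reinforcement-learning refinement loop. The policy size, the stand-in data, and the toy REINFORCE-style update are assumptions made for illustration, not Embodied Intelligence’s actual method.

# Sketch of the two-phase recipe Abbeel describes: first imitate the human
# demonstrations (behavior cloning), then refine by trial and error with a
# reward. Data, policy size, and the reward are illustrative assumptions.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Phase 1: imitation learning - regress demonstrated actions from observations.
demo_obs = torch.rand(512, 8)        # placeholder VR-teleop demonstrations
demo_actions = torch.rand(512, 4)
for _ in range(200):
    loss = nn.functional.mse_loss(policy(demo_obs), demo_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Phase 2: reinforcement learning - the cloned policy is refined by its own
# attempts, here with a toy REINFORCE-style update on a dummy reward.
def rollout(policy):
    obs = torch.rand(1, 8)
    mean = policy(obs)
    dist = torch.distributions.Normal(mean, 0.1)
    action = dist.sample()
    reward = -action.abs().sum()     # stand-in reward (e.g. time to finish)
    return dist.log_prob(action).sum(), reward

for _ in range(200):
    log_prob, reward = rollout(policy)
    loss = -log_prob * reward        # REINFORCE gradient estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The key design point is the one Abbeel highlights: the expensive part of learning the task structure is done once from human demonstrations, so the reinforcement-learning phase only has to optimize speed and reliability.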

Abbeel adds that their technology is particularly suitable for demanding computer vision and manipulation tasks that are currently too complex for traditional software programming methods.

Embodied Intelligence eventually intends to make this software available to others so they can reprogram their own robots and teach them new tasks themselves. This will allow any company, large or small, to quickly reassign robots to other tasks.
