
The human hand is one of nature's most sophisticated creations, and replicating it is one of the most sought-after goals of artificial intelligence and robotics researchers.

A robotic hand with the human hand's range of motion and dexterity could manipulate objects in ways that only we can, transforming work in homes, offices, warehouses, and factories.

Even with all the progress made in the field, research on robotic hands remains prohibitively expensive and is limited to just a few companies and labs with deep pockets.

Robotic hand research is about to get cheaper

That may be about to change, thanks to new efforts to make robotics research accessible to organizations without deep pockets.

Researchers at the University of Toronto, Nvidia, and other organizations have banded together to present a new system that uses highly efficient deep reinforcement learning techniques and optimized simulated environments to train robotic hands at a fraction of what it usually costs.

The technology to create human-like robots is not here yet, but narrowing the focus to a single function, such as training a robotic hand, can push that capability to a highly advanced level.

Where are we?

OpenAI demoed its robotic hand system, Dactyl, in 2019, showing off an impressive level of dexterity that still fell short of what humans can do. Getting there took the equivalent of 13,000 years of training. How do you fit that many years of experience into a practical timeframe?

Fortunately, you can train many agents concurrently, each in its own copy of the environment, and then combine what they have all learned into a single final model.
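The idea can be sketched in a few lines. This is a minimal illustration, not the researchers' actual code: the toy environment, the proportional-control policy, and the hill-climbing update are all hypothetical stand-ins for a real physics simulator and a real reinforcement learning algorithm. What it does show is the core trick: many environments are stepped in lockstep, their experience is pooled into one batch, and a single shared policy is updated from that batch.

```python
# Illustrative sketch (not the authors' method): pool experience from many
# concurrently simulated environments to train one shared policy.
import random

class ToyEnv:
    """Hypothetical 1-D environment: reward is highest when state is 0."""
    def __init__(self, seed):
        self.state = random.Random(seed).uniform(-1.0, 1.0)

    def step(self, action):
        self.state += action
        return -abs(self.state)  # reward: 0 is best, further away is worse

def rollout(gain, n_envs, steps, seed=0):
    """Step n_envs environments in lockstep under the policy a = -gain * s
    and return the mean reward pooled across every environment and step."""
    envs = [ToyEnv(seed + i) for i in range(n_envs)]
    total = 0.0
    for _ in range(steps):
        for env in envs:
            total += env.step(-gain * env.state)
    return total / (n_envs * steps)

def train(n_envs=32, steps=20, iters=30):
    """Toy hill-climb: try a perturbed gain, score it on the pooled batch,
    keep it if it improves -- a stand-in for a real RL update such as PPO."""
    gain = 0.0
    best = rollout(gain, n_envs, steps)
    for i in range(1, iters + 1):
        candidate = gain + random.Random(i).uniform(-0.3, 0.3)
        score = rollout(candidate, n_envs, steps, seed=i)
        if score > best:
            gain, best = candidate, score
    return gain, best
```

Because every environment runs the same policy, adding more environments multiplies the experience gathered per wall-clock step, which is how thousands of simulated years can be compressed into days on parallel hardware.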

Training at that scale and speed is costly, to the point where even wealthy organizations find it prohibitive. Simulated environments let researchers do most of that training for far less money before testing the result on a physical hand. That is the capability this collective wants to make widely available.