
Robot videos: Unitree’s four-legged friends, Meta AI’s fingers and more

Video Friday is your weekly selection of awesome robotics videos collected from your friends at IEEE Spectrum Robotics. We also publish a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2024: 22–24 November 2024, Nancy, France

Have fun with today’s videos!

Just when I thought four-legged friends couldn’t impress me anymore…

[ Unitree Robotics ]

Researchers at Meta FAIR are releasing several new research artifacts that advance robotics and support our goal of achieving advanced machine intelligence (AMI). These include Meta Sparsh, the first general-purpose encoder for vision-based tactile sensing, which works across many tactile sensors and many tasks; Meta Digit 360, an artificial fingertip sensor that delivers detailed touch data with human-level precision and touch-sensing capabilities; and Meta Digit Plexus, a standardized platform for robotic sensor connections and interactions that enables seamless data collection, control, and analysis over a single cable.

[ Meta ]

The first bimanual torso created at Clone includes an actuated elbow, cervical spine (neck), and anthropomorphic shoulders with the sternoclavicular, acromioclavicular, scapulothoracic, and glenohumeral joints. The valve matrix fits compactly into the chest. Bimanual manipulation training is underway.

[ Clone Inc. ]

Equipped with a new behavioral architecture, Nadia autonomously navigates and traverses many types of doors. Nadia is also robust to failed grasps and door-opening attempts, automatically retrying and continuing. We present the robot with pull and push doors, four types of opening mechanisms, and even spring-loaded door closers. A deep neural network and a door lever estimator enable Nadia to identify and track the doors.
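The retry-and-continue behavior can be sketched as a simple loop; `open_fn` and the retry budget here are hypothetical placeholders, not Nadia's actual behavior architecture:

```python
def attempt_door(open_fn, max_retries=3):
    """Retry wrapper: re-attempt a door-opening action until it succeeds
    or the retry budget runs out, then report how many tries it took
    (None means the robot gave up and should replan)."""
    for attempt in range(1, max_retries + 1):
        if open_fn():
            return attempt
    return None

# Toy action that fails twice (e.g. a slipped grasp), then succeeds.
outcomes = iter([False, False, True])
tries = attempt_door(lambda: next(outcomes))
```

The value of structuring it this way is that a failed handle grasp becomes an ordinary, recoverable event rather than a mission abort.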

[ Paper preprint from the Florida Institute for Human and Machine Cognition ]

Thanks, Duncan!

In this study, we integrate the musculoskeletal humanoid Musashi with the wire-controlled robot CubiX, which is capable of connecting with the environment to form CubiXMusashi. This combination addresses the shortcomings of traditional musculoskeletal humanoids and allows movements beyond the capabilities of other humanoids. CubiXMusashi connects to the environment via wires and drives by coiling them, and successfully performs movements such as pulling up, getting up from a lying position, and mid-air kicks, which are difficult for Musashi alone.

[ CubiXMusashi, JSK Robotics Laboratory, University of Tokyo ]

Thank you, Shintaro!

An old boardwalk seems like a nightmare for any flat-footed robot.

[ Agility Robotics ]

This paper presents a novel learning-based control framework that uses keyframing to integrate high-level goals into the natural locomotion of legged robots. These high-level targets are specified as a variable number of partial or full pose targets, randomly spaced in time. Our proposed framework utilizes a multi-critic reinforcement learning algorithm to effectively handle the mix of dense and sparse rewards. In our experiments, the multi-critic method significantly reduces the effort of hyperparameter tuning compared with the standard single-critic alternative. Furthermore, the proposed transformer-based architecture enables robots to anticipate future goals, leading to quantitative improvements in their ability to reach their targets.
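The point of a multi-critic setup is that each reward group gets its own value baseline, so a sparse keyframe bonus is not drowned out by a dense locomotion reward. A minimal sketch of that idea (the weights and normalization scheme here are illustrative assumptions, not the paper's exact algorithm):

```python
import statistics

def combined_advantage(dense_adv, sparse_adv, w_dense=1.0, w_sparse=1.0):
    """Normalize each reward group's advantages with its own critic's
    statistics before mixing, so both signals contribute at a comparable
    scale regardless of their raw magnitudes."""
    def norm(xs):
        mu = statistics.fmean(xs)
        sd = statistics.pstdev(xs) or 1.0  # guard against zero spread
        return [(x - mu) / sd for x in xs]
    return [w_dense * d + w_sparse * s
            for d, s in zip(norm(dense_adv), norm(sparse_adv))]

# A rare keyframe hit (5.0) still dominates the third sample's advantage.
adv = combined_advantage([0.1, 0.2, 0.3], [0.0, 0.0, 5.0])
```

With a single critic, the same tuning would instead require hand-balancing the raw reward weights, which is the hyperparameter burden the paper reports reducing.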

[ Disney Research paper ]

Human-like walking, where this human is the stompiest human ever, making his way through Humanville.

[ EngineAI ]

We present the first static-obstacle avoidance method for quadrotors that uses only an onboard monocular event camera. Quadrotors can fly quickly and nimbly through cluttered environments when piloted manually, but vision-based autonomous flight in unknown environments is difficult, in part because of the sensor limitations of traditional onboard cameras. Event cameras promise nearly zero motion blur and high dynamic range, but they produce a large volume of events under significant ego-motion and lack a continuous-time sensor model in simulation, so direct sim-to-real transfer is not possible.
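An asynchronous event stream has to be turned into something a policy can consume. One common representation is a decaying "time surface"; this is an illustrative sketch of that general technique, not the paper's specific pipeline:

```python
import math

def time_surface(events, t_now, width, height, tau=0.03):
    """Build a dense image where each pixel holds exp(-(t_now - t)/tau)
    for its most recent event: recent activity is bright, stale pixels
    fade toward zero. Events are (x, y, timestamp, polarity) tuples."""
    latest = {}
    for x, y, t, _pol in events:          # keep only the newest event per pixel
        if (x, y) not in latest or t > latest[(x, y)]:
            latest[(x, y)] = t
    surface = [[0.0] * width for _ in range(height)]
    for (x, y), t in latest.items():
        surface[y][x] = math.exp(-(t_now - t) / tau)
    return surface

# Two events on a 2x1 sensor: the second fired "now", the first one tau ago.
surf = time_surface([(0, 0, 0.00, 1), (1, 0, 0.03, -1)],
                    t_now=0.03, width=2, height=1)
```

The exponential decay is what gives the representation some robustness to the event rate spiking under fast self-motion, since old events simply fade rather than accumulate.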

[ Paper from the University of Pennsylvania and the University of Zurich ]

Cross-embodiment imitation learning allows policies trained on specific embodiments to be transferred to different robots, unlocking the potential for large-scale imitation learning that is both cost-effective and highly reusable. This article introduces LEGATO, a cross-embodiment imitation learning framework for transferring visuomotor skills across different kinematic morphologies. We introduce a handheld gripper that unifies the action and observation spaces, enabling consistent definitions of tasks across robots.
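The core idea of a unified action space is that the policy commands motion in the shared gripper (tool) frame, and each robot maps that command into its own base frame. A 2-D rotation-only sketch of this assumed retargeting step (LEGATO's actual pipeline operates on full 6-DoF poses):

```python
def retarget(tool_delta, tool_to_base):
    """Map a policy action expressed in the shared tool frame into a
    specific robot's base frame via that robot's 2x2 tool-to-base
    rotation matrix. Different embodiments differ only in this matrix."""
    (r00, r01), (r10, r11) = tool_to_base
    dx, dy = tool_delta
    return (r00 * dx + r01 * dy, r10 * dx + r11 * dy)

# The same tool-frame action, executed by two differently mounted robots:
# robot A holds the gripper aligned with its base, robot B at 90 degrees.
a = retarget((1.0, 0.0), ((1, 0), (0, 1)))
b = retarget((1.0, 0.0), ((0, -1), (1, 0)))
```

Because the policy never sees the embodiment-specific matrix, a demonstration collected with the handheld gripper transfers to any robot that can carry it.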

[ LEGATO ]

The 2024 Xi’an Marathon has begun! STAR1, Robot Era’s all-purpose humanoid robot, accompanies runners to an exciting start in this ancient yet modern city!

[ Robot Era ]

There are valuable lessons in robotics for both students and mentors. See how the CyberKnights, an RTX-sponsored FIRST robotics championship team, overcame a poor performance with the encouragement of their RTX mentor, scrapping their robot and building a new one in just nine days.

[ CyberKnights ]

In this special video, PAL Robotics takes you behind the scenes of our 20th anniversary celebration, an unforgettable gathering of industry leaders and visionaries in robotics and technology. From inspirational speeches to milestone highlights, the event was a testament to our journey and to the incredible partnerships that have shaped it.

[ PAL Robotics ]

Thank you, Rugilė!
