Weekly Robotics Newsletter

Issue 224

The survey I ran last week provided some precious feedback. 20% of the readers who took it want to receive up to one weekly e-mail from Weekly Robotics, and I’ll make sure we keep it that way. When we start producing long-form content, it will only be featured in the weekly newsletter in the format you already know. The rest of the feedback you provided is also much appreciated and includes a fair amount of encouragement. Thanks! Some suggested that I should focus more on software, while others felt we need to feature more of the mechanical and electronic side of robotics. I will do my best to strike a balance in the content. Another interesting insight (one I somewhat expected, having done my studies at a technical university) is that 95% of readers identify as men. If anyone has thoughts on how to make the newsletter more inclusive, I welcome your feedback! Now, back to work! As usual, the publication of the week section is manned by Rodrigo. Last week’s most clicked link was the TechCrunch Boston Scene Report, with an 11.5% open rate.


Sensoria Obscura: Event Cameras, Part I

Tangram Vision

I thought I knew everything about event cameras from the many times I’ve featured them in this newsletter. Oh, how wrong I was. In this Tangram Vision blog post, you can learn about these cameras and some concerns you need to consider before implementing them in your project. I had never come across the term Megaevents per second (Mev/second) before. Maybe I should have paid more attention to what I’ve been reading, but this concept is crucial for applying these sensors. I’m looking forward to part 2 of this series and featuring it in the newsletter.
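To get a feel for why an event rate in Mev/second matters in practice, here is a back-of-the-envelope bandwidth sketch. The 8-bytes-per-event packing and the example rates are my own illustrative assumptions, not figures from the Tangram Vision post:

```python
# Rough bandwidth estimate for an event camera stream.
# Assumption (illustrative only): each event packs x/y coordinates,
# a timestamp, and a polarity flag into 8 bytes.
BYTES_PER_EVENT = 8

def bandwidth_mb_per_s(megaevents_per_s: float) -> float:
    """Raw data rate in MB/s for a given event rate in Mev/s."""
    return megaevents_per_s * 1e6 * BYTES_PER_EVENT / 1e6

# A busy scene at 100 Mev/s would already stream ~800 MB/s of raw events,
# which is why downstream processing throughput is a real design concern.
print(bandwidth_mb_per_s(100))  # 800.0
```

Even with a more compact event encoding, the point stands: event throughput, not frame rate, is the figure that sizes your compute and I/O budget with these sensors.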

I want to use this opportunity to thank the Tangram Vision team for their blog. Their articles are always super informative and top quality. I wish more companies followed suit by producing highly informative content like this.


ROS News

There is an election for a ROS 2 Technical Steering Committee Community Representative. This year, there are some very high-quality applications from six candidates. You can learn more about the candidates and how to vote in this thread.

This discussion on ROS 2 Python nodes showing high CPU usage only when performing message serialization raised quite a few eyebrows on the WR Slack for patrons, and contributed to a lively discussion about language performance and ROS in general.

In this thread, madmage discusses ROS 1 to ROS 2 migration considerations that I found insightful and will come back to the next time I need to port a codebase.


World’s hardest jigsaw vs. puzzle machine (all white)

YouTube (Stuff Made Here)

We covered the first part of this puzzle machine project back in issue #208. The machine has changed quite a bit since the first iteration, and Shane fed the puzzle pieces in manually this time. Fortunately, he had to do it only once(?). The lesson about scaling back expectations for hobby projects like this is really powerful.


San Francisco police can now use robots to kill

Tech Crunch

This is a follow-up to last week’s issue. San Francisco’s board of supervisors approved the policy in an 8-3 vote, meaning the SF police can now use remotely operated robots equipped with lethal equipment.


Compliant Suction Gripper with Seamless Deployment and Retraction for Robust Picking against Depth and Tilt Errors

arXiv

Researchers from Seoul National University came up with an exciting suction gripper design that consists of an air tube with two embedded springs and a 3-way valve. The flexible design allows picking up objects at various distances and orientations. The tests in the paper verify robust picking at separations of up to 147.5 mm and tilt angles of up to 60 degrees.


Robot Learns Human Trick for Not Falling Over

IEEE Spectrum

If you are a humanoid robot, it takes only one simple trick not to fall when your leg gives out. OK, maybe it’s more complicated since you don’t have a cerebral cortex, but researchers at Inria in France might have a solution for you.


Publication of the Week – Analyzing Infrastructure LiDAR Placement with Realistic LiDAR (2022)

arXiv

Recently I came across Vehicle-to-Everything (V2X). V2X is a system for vehicular communication with other devices, encompassing V2V (vehicle-to-vehicle), V2I (vehicle-to-infrastructure), and many others, with the goal of increasing road safety, traffic efficiency, etc. This paper presents a method for placing LiDAR sensors efficiently to get the best V2I data. The authors used the CARLA simulator and created a Realistic LiDAR Simulation (RLS) library to generate more realistic point clouds. They tested three distinct road scenarios, evaluating both InfraD and InfraNUC metrics to find the best sensor placement. The RLS library and detailed results are available in this GitHub repository.


Business

Locus Robotics raises $117 million in new funding, valuing startup at $2 billion

Robotics & Automation News

Locus Robotics, a maker of autonomous mobile robots for fulfillment and distribution warehouses, has raised $117 million in Series F funding, led by Goldman Sachs Asset Management and G2 Venture Partners.


MicroVision acquiring LiDAR maker Ibeo

The Robot Report

It seems that the LiDAR market is consolidating: following the recent news about a potential merger between Ouster and Velodyne, MicroVision is now acquiring some assets from Ibeo for up to 15 million Euros, after Ibeo’s recent insolvency filing.


Jobs

Below are the latest positions from our job board. If you want to learn more about paid job advertising, please check the board for more details.


Robot Navigation Engineer

Keybotic SL (Barcelona, Spain)

Keybotic is a startup with DARPA award-winning technology that has created Keyper, an autonomous robot dog for industrial inspections. We’re seeking a talented and highly motivated Robotics Navigation Engineer to solve navigation and estimation challenges to make Keyper work in the real world.

Source: https://www.weeklyrobotics.com/