
I was at a ROS conference a few years ago. A presenter went up and gave a talk about lidar and point clouds, speaking about how obsolete optical cameras were now that lidar was readily available.

The next person gets up and gives a talk about all the advancements in generating point clouds from optical cameras.

By far the best part is how tied to specific versions of Ubuntu each ROS release is, just getting all the packages installed and running requires sacrificing a cat while chanting Hail Mary backwards in Latin.




> By far the best part is how tied to specific versions of Ubuntu each ROS release is, just getting all the packages installed and running requires sacrificing a cat while chanting Hail Mary backwards in Latin.

100% this. I had a very, very miserable time setting up two systems and trying to get them running a version that was supported. The worst part is SBCs that stop getting OS updates and become permanently locked in to a specific version. Which also forces the rest of your hardware to use the same version. Using a Jetson Nano with Ubuntu 18.04 in 2022 was lots of fun...
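The lock-in described above is by design: each ROS distribution targets exactly one Ubuntu LTS as its Tier-1 platform (per REP 3 and REP 2000). A toy lookup over the well-known pairings, just to illustrate the pinning (an illustrative subset, not an exhaustive support matrix):

```python
# Tier-1 Ubuntu LTS target per ROS/ROS 2 distribution
# (per REP 3 / REP 2000; illustrative subset only).
ROS_TO_UBUNTU = {
    "kinetic": "16.04",  # ROS 1
    "melodic": "18.04",  # ROS 1 -- the Jetson Nano situation above
    "noetic":  "20.04",  # ROS 1, final release
    "foxy":    "20.04",  # ROS 2
    "humble":  "22.04",  # ROS 2 LTS
    "jazzy":   "24.04",  # ROS 2 LTS
}

def distros_for(ubuntu: str) -> list[str]:
    """Which distros you are effectively pinned to on a given Ubuntu LTS."""
    return [d for d, u in ROS_TO_UBUNTU.items() if u == ubuntu]

print(distros_for("18.04"))  # ['melodic'] -- stuck, as described above
```

Once the SBC vendor stops shipping images past 18.04, that one-entry list is the whole story.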

Last year I met a couple of university students working on a robot and out of curiosity I asked what they were using as a microcontroller and the software stack. They were running ROS. When they said they still hadn't upgraded to ROS 2 yet, I could feel their pain...


Cameras have all the same problems as lidar, and none of the advantages. Show a lidar a 45° mirror or weather-glass to test what your $60k bought.

Monocular feature extraction has been around for decades, but it is only reliable for people who never go outside into sun, dust, or rain. =3


The point was how fragmented the "community" has become.

Also, the talk about visual-camera point cloud generation was very impressive.

Lidar also suffers from the environment: incidence angles, reflections, and ambient light sources.


In general, lidar is used to remove the ambiguity in a local ground scan, while cameras extrapolate overlapping texture gradients to guess distant surface structure (documented in the old book https://www.amazon.com/Learning-OpenCV-Computer-Vision-Libra...).

There are some fairly good FOSS tools around, like COLMAP, if you want to learn why automatic monocular pose recovery and SfM are hard.
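For contrast, the part that is *not* hard: once SfM has recovered the camera poses, getting a 3D point back from two views is a small linear solve (DLT triangulation). A minimal sketch with numpy only; the projection matrices and point are made up for illustration, and real pipelines like COLMAP spend nearly all their effort on the pose-recovery and matching steps that precede this:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D image coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Null vector of A (smallest singular value) is the homogeneous point.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity intrinsics, second translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])            # a point 4 units ahead
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]               # projection into view 1
x2 = (P2 @ h)[:2] / (P2 @ h)[2]               # projection into view 2

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))             # recovered exactly (no noise)
```

With noisy matches and unknown poses, this tidy picture falls apart, which is the whole point of the comment above.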

Real autonomous robotics is hard, and people make the same predictable mistakes every 4 years. Retrofitting a consumer Yarbo would be cool though. =3


I'm not sure how you ended up here.

I'm quite familiar with robotics and lidar, and ground planes, and ROS. Not sure what 4-year cycle you're talking about.


>Not sure what 4-year cycle you're talking about.

I observe that many groups solving platform design issues eventually move on to other projects or careers. The dozens of documented fool's errands in the academic literature and commercial sectors are lost again each time. There are various institutional and structural reasons this occurs, but the outcome is usually the same.

One must accept many problems are not purely technical in nature, and unfortunately complex Mechatronics often tend to exhibit sustainability problems.

>I'm not sure how you ended up here.

Don't worry about it... =3


IMO the worst part is Canonical having a tight grip on it.

Reinventing the build chain every other year is miserable too.



