Microsoft cranks out thousands of those HD-3000 LifeCams and all of ours produce the same image. For what it's worth, the Raspberry Pi frcvision image with a modest Java GRIP pipeline maxes out two of the four CPU cores. I haven't had time for much experimenting, but I think the distance calculation I have in mind may max out another core.

I hadn't paid attention to the Intel RealSense (D435), so I looked at its Amazon entry just now. The price is good ($180) and it comes in a case. For that high a resolution you may have to step up to an Odroid, though, and there isn't as vast a support network compared to the RPi. On the other hand, if you aren't processing the image on an RPi or roboRIO but just get back some sort of distance value, that's a light load on the CPU. Of course, the target then has to be isolated somehow in the camera.

How are you planning on using stereo to help with localization? I've done a lot of work with localization and 3D camera stuff in FRC - I'd be happy to answer any questions you have. I'd recommend just using an off-the-shelf stereo camera solution: the frame synchronization really isn't fun to DIY. The ZED works fine and has nice libraries (though you'll need a Jetson).

Thank you to all who have given suggestions. I'm hoping that stereo vision provides a more accurate distance estimate to landmarks than other methods, like solvePnP, but as much as anything else I just want to implement some of the algorithms. As an off season project, the goal isn't necessarily to have the best solution, but to learn how to do some things and teach them to students. Like I said earlier, the ZED is a bit pricier than I want to go. If the Raspberry Pi compute module really supports two cameras, and those cameras are close to synchronized, that sounds very promising - I have no idea how that works, but it sounds like fun to try. The Intel RealSense also seems promising: it's a bit cheaper than the ZED, and probably more accurate than anything I could do with a Pi camera.
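The worry about frame synchronization can be made concrete with the standard pinhole stereo relation Z = f·B/d. Here's a back-of-the-envelope sketch; the focal length, baseline, robot speed, and frame gap are all made-up numbers standing in for a real rig, not specs of any camera mentioned above.

```python
# Sketch: why stereo depth is sensitive to frame synchronization.
# All constants below are hypothetical, not from any specific camera.

FOCAL_PX = 700.0     # focal length in pixels (assumed)
BASELINE_M = 0.06    # separation between the two lenses, meters (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Standard pinhole stereo relation: Z = f * B / d."""
    return FOCAL_PX * BASELINE_M / disparity_px

# A landmark 3 m away produces a disparity of f * B / Z pixels.
z = 3.0
d = FOCAL_PX * BASELINE_M / z            # 14 px of true disparity

# Suppose the two frames are captured ~16 ms apart (one 60 fps period)
# while the robot strafes sideways at 2 m/s.  The landmark shifts in the
# image between the two exposures, and that shift is indistinguishable
# from extra disparity:
shift_m = 2.0 * 0.016                    # ~3.2 cm of unmodeled motion
spurious_px = FOCAL_PX * shift_m / z     # extra apparent disparity

z_bad = depth_from_disparity(d + spurious_px)
print(z, z_bad)                          # true vs. corrupted estimate
```

Even with these toy numbers, a single frame period of skew drags the 3 m estimate down by roughly a meter, which is exactly the failure mode described upthread and why hardware-synchronized off-the-shelf cameras are attractive.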
My off season focus is computer vision, and one topic I hope to get to is localization, i.e. figuring out where the robot is based on cues in the environment. One tool that could be used as part of the solution is stereo vision. (Well, there might be more than one, but one at a time…) Somehow, I have to get two time-synchronized images from two different cameras. If the robot is moving, I think the motion will be enough that two images taken “at the same time” won’t really be at the same time, and the difference will be enough to completely throw off the distance measurements.

One option I did see from a thread here was the ZED camera from Stereolabs, but it’s a bit pricey for FIRST: it barely fits into the newly revised cost limits, and it looks like it needs some pricey support systems to go with it, so I would rather find some other solution. And so, before I go too far down a particular road, I’ll start by asking if there are any giants whose shoulders I can stand on. If you have already done this, I would be very happy to try to imitate your solutions.

The Kayeton Technology model KYT-U100-960R1 stereo USB 2.0 camera module with non-distortion lenses is cheap ($90) and easy to experiment with, and it works. The two images are slightly different colors, though - one seems a bit red and the other a bit green. I complained, but the company did nothing; I don’t see why they couldn’t get two cameras to see the same.
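A cheap software first-aid for the red/green mismatch between the two sensors is per-channel gain normalization (gray-world white balance) applied to each image independently before block matching. This is a minimal NumPy sketch with synthetic pixel values, not data from the KYT-U100-960R1, and it assumes the scene averages out to roughly gray.

```python
import numpy as np

def gray_world(img: np.ndarray) -> np.ndarray:
    """Scale each color channel so its mean matches the overall mean.

    img: H x W x 3 float array with values in [0, 1].
    Assumes the scene is on average gray (the gray-world assumption).
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean
    gains = channel_means.mean() / channel_means      # overall / per-channel
    return np.clip(img * gains, 0.0, 1.0)

# Synthetic example: the same flat gray scene as seen by two
# miscalibrated sensors, one tinted red and one tinted green.
scene = np.full((4, 4, 3), 0.5)
left = scene * np.array([1.2, 1.0, 1.0])    # reddish cast
right = scene * np.array([1.0, 1.2, 1.0])   # greenish cast

left_fixed = gray_world(left)
right_fixed = gray_world(right)
# After correction the two images have matching channel means, which
# helps any stereo matcher that compares raw intensities between views.
```

For real footage you would compute the gains once from a calibration frame rather than per-frame, so the correction doesn't flicker as the scene content changes.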