"Active Sonar" display
I want to build a project on the Onion Omega that uses a rotating webcam, OpenCV, node.js, and p5.js (or a similar alternative) to create an active sonar-style display of distances.
Initially, I'm thinking it would sketch out distances and outlines of objects on a 2D overhead plan view. Of course, it could easily be extended, using one or two camera systems or Kinect-style depth cameras, to display live maps if we so choose.
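The core of that plan-view display is just mapping each (bearing, distance) reading to screen coordinates around the vehicle. A minimal sketch in Python (the actual display would be p5.js, but the math is the same; the center point and scale here are arbitrary illustrative choices):

```python
import math

def to_overhead_xy(angle_deg, dist, center=(200, 200), scale=1.0):
    """Map a (bearing, distance) reading to pixel coordinates on an
    overhead plan view, with the vehicle at `center` and 0 degrees
    pointing straight up the screen (negative y)."""
    rad = math.radians(angle_deg)
    x = center[0] + dist * scale * math.sin(rad)
    y = center[1] - dist * scale * math.cos(rad)
    return (x, y)

# A reading 100 units dead ahead lands directly above center;
# one at 90 degrees lands directly to the right of it.
print(to_overhead_xy(0, 100))
print(to_overhead_xy(90, 100))
```

As the camera rotates, each new reading overwrites the point at its bearing, giving the sweeping-sonar effect.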
The main driver for me is a 360-degree FOV overhead indicator showing the distance to the nearest objects around my vehicle. This would help with reversing, or even, should I decide to move in that direction, with depth-mapping roads: crowdsourced road-condition data from the general public or a fleet of government vehicles could help fund repairs of our infrastructure more adequately.
To that end, I've been diligently trying (and succeeding) to port node.js and OpenCV. My next task, since the available node.js is a horribly outdated version, is to compile the latest node.js and npm. I'm wondering if anyone would like to take what I've started and help extend it in any way they can. I'm already pretty good at building packages for the Onion Omega via the vanilla OpenWrt buildroot, and I've published Docker containers on Docker Hub that contain these toolchains if needed. What I'd like is someone with immediate, leverageable OpenCV experience to build the portion that connects a webstream to the OpenCV backend and enables an example categorization algorithm for shapes. I figure this will be a good start toward the harder machine learning problem of categorizing planes to adequately estimate distances for display. Perhaps something akin to: https://www.cs.bris.ac.uk/~haines/Documents/BmvcPaperCamera11.pdf
Is there any interest in this system? I'll end up building it by myself anyway, as a free-time project for as long as my employment doesn't preclude me from dabbling in it, but I wanted to give others the option to help and, at the same time, the chance to direct my efforts toward this end.