A place to talk about anything and everything Omega related
@altr I'm not sure about OpenWrt 21, but check out this wiki about how to control the LED via a trigger.
I know it works, as it was one of the earlier things I tried when learning about the Omega2.
Wiring is one issue (I can't say for sure whether it works with the Expansion itself, since I test with a different breakout of the chip). However, you can easily adapt this example and use I2C: https://www.element14.com/community/community/raspberry-pi/blog/2012/12/14/nfc-on-raspberrypi-with-pn532-py532lib-and-i2c
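For what it's worth, talking to the PN532 over I2C mostly comes down to wrapping each command in the standard PN532 information frame (preamble, length, length checksum, direction byte, data, data checksum). Here is a minimal sketch of just the frame builder, independent of any particular I2C library; the 0x24 address mentioned in the comment is an assumption about typical PN532 breakouts, so check yours:

```python
# Sketch: build a PN532 host-to-chip command frame (see the PN532 user
# manual, "normal information frame"). The checksums are defined so that
# LEN + LCS == 0 (mod 256) and TFI + data + DCS == 0 (mod 256).

def pn532_frame(command_data):
    tfi = 0xD4                      # direction byte: host -> PN532
    body = [tfi] + list(command_data)
    length = len(body)
    lcs = (-length) & 0xFF          # length checksum
    dcs = (-sum(body)) & 0xFF       # data checksum
    return bytes([0x00, 0x00, 0xFF, length, lcs] + body + [dcs, 0x00])

# GetFirmwareVersion (command code 0x02) makes a good first smoke test:
frame = pn532_frame([0x02])
# This frame would then be written to the chip's I2C address (commonly
# 0x24 on PN532 breakouts) with whatever I2C library you prefer.
```

If the chip answers the GetFirmwareVersion frame at all, the wiring and the I2C side are fine and any remaining trouble is in the higher-level tooling.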
Lastly, there is no official documentation of the NFC/RFID Expansion's schematics, but our tests suggest that Onion has left out SPI/I2C support for their chip; that still needs to be verified.
We are using Python tools for now, but I will get back to you if we find anything else!
Now that auto-provisioning is up and running, I still have one small imperfection in play.
Whenever mtd -r write <IMAGE-NAME>.bin firmware finishes, it skips the -r (reboot) step because it complains about not being able to erase a block, and it exits with status 1:
Writing from 20211027-pan35-blusser-9-5.bin to firmware ...
[e]Failed to erase block
mtd verify doesn't end well either.
After a reboot the Omega seems to be running my custom firmware happily, so I'm not too worried yet, but it keeps nagging at me.
I have followed the instructions closely and also tried a few more times to create a new .bin file with the dd method.
This is how that runs:
root@Pandora3-74F3:/mnt/mmcblk0# dd if=/dev/mtd3 of=20211108-pan35-blusser-9-5.bin
64896+0 records in
64896+0 records out
Any idea where I should look?
I have a simple question: how can you access the point cloud of the camera? Or is points_3d the point cloud?
I have been experimenting with both the Tau LiDAR Camera and an external library called open3d, which is used to visualize point clouds. I tried to use points_3d to visualize the point cloud by converting it into a .ply file, but noticed the point cloud wasn't as accurate as I had hoped.
Here are some examples as to what I mean:
A screenshot from what the camera sees (wooden block):
A screenshot from what the point cloud sees (wooden block):
A screenshot from what the camera sees (stairs):
A screenshot from what the point cloud sees (stairs):
This is something similar to what I expect/desire: an accurate point cloud based on what the camera sees:
What is the correct way of getting the point cloud from the camera? Or is the points_3d list indeed the point cloud, but is it not getting an accurate result due to, for example, an incorrect integration time?
Thanks in advance for any help! I would love to hear what I am doing wrong or how I could get my desired result. I would even be grateful for an explanation of how to actually get the point cloud. Thanks!