fast-gpio locking up



  • I have been experimenting with fast-gpio. When called from the shell, typing one command at a time, it works fine. When called from a Ruby or Python script through shell calls, sometimes after several calls to fast-gpio in a row, the script freezes. This does not crash the Omega, but it does leave the process a zombie, and I have to background it and then kill -9 it. I experienced a possibly similar phenomenon when trying to use file reads and writes on the sysfs virtual GPIO files: after multiple operations the program would fail with fptr errors. There's something very BETA about the GPIO support at this point in firmware 0.0.6 b275.
    Any ideas? My next step is to use SWIG to wrap the new-gpio C++ code and then access that from Ruby/Python. Somewhat of a hack, but maybe the new-gpio lib will take care of whatever is overwhelming the controller by the way I'm trying to do things now.
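
    For reference, here is a minimal sketch of the kind of loop that triggers the freeze (the pin number and timing are illustrative, not my exact script):

        import subprocess
        import time

        # Toggle GPIO pin 6 via repeated shell calls to fast-gpio.
        # After some number of iterations, one of these calls never
        # returns and the script hangs.
        for _ in range(100):
            subprocess.call(["fast-gpio", "set", "6", "1"])
            time.sleep(0.05)
            subprocess.call(["fast-gpio", "set", "6", "0"])
            time.sleep(0.05)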



  • @Justin-Sowers In what follows, I am going to assume that the fast-gpio calls you refer to are for PWM output. If I am wrong in this assumption, let me know and ignore the rest of this post 😞

    When fast-gpio does PWM output, it does so by forking a separate process that keeps the output going by pulsing the relevant pin. Thus a new process is created for every fast-gpio pwm call, and there is no easy way to get rid of them other than by using kill, as you indicate.
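
    For illustration, here is a sketch of how the processes pile up when this is scripted (assuming Python shell calls, and busybox ps on the Omega):

        import subprocess

        # Each pwm call forks a background pulser process that never
        # exits on its own, so three calls leave three processes.
        for freq in ("100", "200", "300"):
            subprocess.call(["fast-gpio", "pwm", "6", freq, "50"])

        # List the accumulated fast-gpio processes.
        subprocess.call("ps | grep fast-gpio", shell=True)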

    While my new-gpio program does the same for a PWM output, it is smart enough to detect whether there is already a process running PWM for the pin, and it kills that process before starting a new one for the changed PWM output. Thus there will be at most one process for each pin on which new-gpio pwm has been called. While still not ideal, this does stop the proliferation of processes.
    So for a start, you could try using new-gpio pwm <n> <freq> <duty> in place of fast-gpio pwm <n> <freq> <duty>.
    Note that the new-gpio program can also be used to explicitly stop and kill the process for a PWM output, using: new-gpio pwmstop <n>
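
    For example, from a Python script the substitution would look something like this (a sketch; the frequency and duty-cycle values are just examples):

        import subprocess

        # Start (or restart) PWM on pin 6 at 200 Hz, 50% duty cycle.
        # new-gpio kills any existing PWM process for the pin before
        # forking a fresh one, so repeated calls leave at most one
        # background process per pin.
        subprocess.call(["new-gpio", "pwm", "6", "200", "50"])

        # ... later, explicitly stop the PWM process for pin 6:
        subprocess.call(["new-gpio", "pwmstop", "6"])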

    I am contemplating making some changes to the new-gpio program so that it forks at most one additional process; if/when the forked process is needed, this is detected, and changes are communicated to it using shared memory. While doable, it is not trivial and may take me some time to get done.

    (As an aside, all the above comments relating to PWM also apply to new-gpio irq usage, since handling interrupts requires a separate process to be running to handle the interrupt.)

    Finally, if you make calls to the methods in the libnew-gpio library from within a single program process, no separate processes are forked and the PWM output continues as long as the calling program is still running; in this situation, the PWM output is handled by lightweight POSIX threads that are started and stopped as required.



  • @Kit-Bishop Unfortunately, I'm not using PWM. I'm just doing straight pin on and off calls, as well as reading states: the equivalent of 'fast-gpio set 6 1', for instance, to set pin 6 on at 2.xV. Multiple set 0/1 and get-direction calls across several pins usually cause the lock-up, but there's variability in exactly when the lock-up occurs. That's why I think it's some sort of I/O buffering overload.
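
    One thing I'm checking (purely a guess on my part): whether the script ever captures the command's output through a pipe without draining it, which is a classic way for shell-call scripts to deadlock. In Python, for example:

        import subprocess

        # Hazard sketch: if the child writes enough to fill the pipe
        # buffer and nothing reads it, the child blocks on write and
        # wait() never returns.
        p = subprocess.Popen(["fast-gpio", "get-direction", "6"],
                             stdout=subprocess.PIPE)
        p.wait()                 # can deadlock with an undrained pipe
        out = p.stdout.read()

        # Safe pattern: communicate() drains the pipe while waiting.
        p = subprocess.Popen(["fast-gpio", "get-direction", "6"],
                             stdout=subprocess.PIPE)
        out, _ = p.communicate()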



  • @Justin-Sowers Fair enough. Sorry I wasn't able to help.
    fast-gpio set 6 1 should not be creating any additional processes. It just uses hardware memory register access to set the pin value (as do the get-direction calls; only pwm causes fast-gpio to fork a separate process).

    So, it is a bit of a puzzle as to why you end up with multiple processes running.

    Since I really don't have any experience with Ruby or Python, which you say you are using, all I can suggest is that you look at what Ruby or Python is doing when it runs the shell command fast-gpio set 6 1, etc. Perhaps others with more knowledge of these could advise.
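
    For what it's worth, one pattern worth trying (an assumption on my part about how your script shells out; shown in Python) is to use calls that both wait for and reap the child process, so no zombies accumulate:

        import subprocess

        # check_output() runs the command, drains its stdout, waits
        # for it to exit, and reaps it, leaving no zombie behind.
        direction = subprocess.check_output(
            ["fast-gpio", "get-direction", "6"])

        # For commands whose output you don't need, call() likewise
        # waits for and reaps the child before returning.
        subprocess.call(["fast-gpio", "set", "6", "1"])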


