We are still in the design phase of our project, but we plan to run three independent processes on an embedded Linux kernel. One of the processes will be a communications module that manages all communications to and from the device over various channels.
The other two processes will need to send and receive messages through the communications process. I’m trying to evaluate Linux’s IPC capabilities; the messages will range in size from debug logs to streaming media at a rate of 5 Mbit/s. Additionally, media could be streaming in and out at the same time.
For this application, which IPC mechanism would you recommend? http://en.wikipedia.org/wiki/Inter-process_communication
If it makes a difference, the processor runs at 400-500 MHz. The solution does not need to be cross-platform; Linux-only is fine. The code must be written in C or C++.
Asked by RishiD
When choosing an IPC mechanism, factors such as transfer buffer sizes, data transfer techniques, memory allocation schemes, locking implementations, and even code complexity should be considered.
In terms of performance, the most commonly used IPC mechanisms are Unix domain sockets and named pipes (FIFOs). According to a report titled Performance Analysis of Various Mechanisms for Inter-process Communication, Unix domain sockets may deliver the best performance for interprocess communication. I’ve seen mixed results elsewhere, though, suggesting that pipes may be preferable in some cases.
Because of their simplicity, I favor named pipes (FIFOs) when transferring modest amounts of data. For bi-directional communication, a pair of named pipes is required. Unix domain sockets take a little more effort to set up (socket creation, initialization, and connection), but they are more flexible and may offer better performance (higher throughput).
To figure out what will work best for you, you may need to perform some benchmarks for your individual application/environment. Unix domain sockets appear to be the best fit based on the description supplied.
Beej’s Guide to Unix IPC is a nice place to start if you’re new to Linux/Unix IPC.
Answered by jschmier
Unix domain sockets would be my choice: they have less overhead than IP sockets (no inter-machine communication to worry about) but offer the same ease of use.
Answered by jldupont
I’m surprised no one has mentioned dbus.
It may be overkill if your application is architecturally simple, in which case, in a controlled embedded environment where performance is critical, you can’t beat shared memory.
Answered by Dipstick
If performance becomes an issue, you can use shared memory, but it’s a lot more involved than the other approaches: you’ll need a signaling mechanism (semaphore, etc.) to indicate that data is available, as well as locks to prevent concurrent access to structures while they’re being modified.
The upside is that you can transfer a lot of data without having to copy it in memory, which will definitely improve performance in some cases.
There may be libraries that provide higher-level primitives on top of shared memory.
Shared memory is generally obtained by mmap-ing the same file with MAP_SHARED (which can be on a tmpfs if you don’t want it persisted); many apps also use System V shared memory (IMHO for stupid historical reasons; it’s a much less nice interface to the same thing).
Answered by MarkR
As of this writing (November 2014), kdbus and Binder have landed in the Linux kernel’s staging branch. There is no guarantee that either will make it into the mainline kernel, but the outlook for both is encouraging. Binder is Android’s lightweight IPC mechanism, while kdbus is a kernel-based dbus-like IPC mechanism that reduces context switching and speeds up messaging.
TIPC (Transparent Inter-Process Communication) is a resilient protocol that can be used for clustering and multi-node setups (http://tipc.sourceforge.net/).
Answered by jeremiah
Post is based on https://stackoverflow.com/questions/2281204/which-linux-ipc-technique-to-use