Our project is still in the design phase, but we are considering running three independent processes on an embedded Linux kernel. One of the processes will be a communications module, which will manage all communications to and from the device via various means.
The other two processes will need to send and receive messages through the communication process. I’m trying to evaluate Linux’s IPC capabilities; the messages the other processes will deliver range in size from debug logs to streaming media at 5 Mbit/s. Additionally, media could be streaming in and out at the same time.
For this application, which IPC mechanism would you recommend? http://en.wikipedia.org/wiki/Inter-process_communication
If it makes a difference, the processor runs at 400-500 MHz. The solution does not need to be cross-platform; Linux alone will suffice. The code must be written in C or C++.
Asked by RishiD
When choosing an IPC mechanism, factors such as transfer buffer sizes, data transfer techniques, memory allocation schemes, locking mechanism implementations, and even code complexity should be considered.
In terms of performance, the most commonly used IPC techniques are Unix domain sockets and named pipes (FIFOs). According to a report titled Performance Analysis of Various Mechanisms for Inter-process Communication, Unix domain sockets may deliver the best interprocess-communication performance. I’ve observed mixed results elsewhere, indicating that pipes may be preferable.
When sending small amounts of data, I prefer named pipes (FIFOs) for their simplicity. For bi-directional communication, a pair of named pipes is required. Unix domain sockets need a little more effort to set up (socket creation, initialization, and connection), but they are more versatile and may provide superior performance (higher throughput).
To figure out what will work best for you, you may need to perform some benchmarks for your individual application/environment. Unix domain sockets appear to be the best fit based on the description supplied.
Beej’s Guide to Unix IPC is good for getting started with Linux/Unix IPC.
Answered by jschmier
I would go for Unix Domain Sockets: less overhead than IP sockets (i.e. no inter-machine comms) but same convenience otherwise.
Answered by jldupont
I’m surprised no one has mentioned dbus.
If your application is architecturally simple, dbus may be overkill; and in a controlled embedded environment where performance is critical, shared memory is hard to beat.
Answered by Dipstick
If performance becomes an issue, you can use shared memory, but it’s considerably more involved than the previous approaches: you’ll need a signaling mechanism (a semaphore, etc.) to indicate that data is available, as well as locks to prevent concurrent access to structures while they’re being modified.
The upside is that you can transfer a lot of data without having to copy it in memory, which can significantly improve performance in some cases.
There may also be libraries that provide higher-level primitives on top of shared memory.
Shared memory is usually obtained by mmapping the same file with MAP_SHARED (which can be on a tmpfs if you don’t want it persisted); but many apps also use System V shared memory (IMHO for silly historical reasons; it’s a far less attractive interface to the same thing).
Answered by MarkR
Kdbus and Binder have been removed from the Linux kernel’s staging branch as of this writing (November 2014). There is no assurance at this time that either will make it in, but the outlook for both is encouraging. Binder is the lightweight IPC mechanism in Android; Kdbus is a dbus-like IPC mechanism in the kernel that reduces context switches, greatly speeding up messaging.
There is also “Transparent Inter-Process Communication,” or TIPC, which is robust and useful for clustering and multi-node setups: http://tipc.sourceforge.net/
Answered by jeremiah
Post is based on https://stackoverflow.com/questions/2281204/which-linux-ipc-technique-to-use