Coder Perfect

Is it true that message queues are no longer used in Linux?


I’ve been playing with message queues (System V, but POSIX should be ok too) in Linux recently and they seem perfect for my application, but after reading The Art of Unix Programming I’m not sure if they are really a good choice.

Is it still the case that the System V message queues are buggy in newer Linux versions? I’m not clear if the author is suggesting that POSIX message queues are acceptable.

Sockets appear to be the preferred IPC for practically everything(?), but I don’t see how implementing message-queue semantics on top of sockets or anything else would be very simple. Or am I overthinking things?

I’m not sure if it matters that I work with embedded Linux.

Asked by Purple Tentacle

Solution #1

Message queues are one of my favorite IPCs, and I believe they are the most under-utilized in the Unix world. They are quick and simple to use.

A couple of ideas:

Answered by Duck

Solution #2

Message queues, I believe, are appropriate for some applications. POSIX message queues have a richer interface; for example, instead of numeric IDs you can give your queues names, which is particularly helpful for fault-finding (it’s easier to see which queue is which).

The POSIX message queues can be mounted as a filesystem under Linux, listed with “ls”, and deleted with “rm”, which is quite useful (System V depends on the clunky “ipcs” and “ipcrm” commands).

Answered by MarkR

Solution #3

I’ve never used POSIX message queues, because I like to keep open the possibility of distributing my messages across a network. With that in mind, a more robust message-passing interface, such as zeromq or something that implements AMQP, could be a good choice.

One of the nice things about 0mq is its lockless, zero-copy design, which is very fast when used within the same process space in a multithreaded program. Messages can still be sent over a network using the same interface.

Answered by bmdhacks

Solution #4

In my opinion, the POSIX message queue has some significant drawbacks:

A Unix datagram socket does the same job as a POSIX message queue. Because it lives in the socket layer, it can be used with select()/poll() or other I/O-wait mechanisms. Using select()/poll() is an advantage when designing an event-based system: it lets you avoid busy loops.

The message queue API also holds a surprise: consider mq_notify(). It is used to obtain receive events. The name suggests it notifies something about the message queue, but it actually registers for notification rather than notifying anything.

More surprising still, mq_notify() must be called again after every mq_receive(), which can create a race condition (if another process or thread calls mq_send() between the mq_receive() and the mq_notify()).

On top of that, it comes with its own full set of calls, mq_open(), mq_send(), mq_receive(), and mq_close(), each with its own semantics, which is redundant and in some cases inconsistent with the specifications of the socket open(), send(), recv(), and close() calls.

Message queues, in my opinion, should not be used for synchronization. This is where eventfd and signalfd come in handy.

However, the POSIX message queue does support real-time use: it has message priorities.

Messages are placed on the queue in decreasing order of priority, with a newer message placed after older messages of the same priority.

However, this priority is also available as out-of-band data for sockets!

Finally, the POSIX message queue is, in my opinion, an outdated API. If real-time characteristics are not required, I always choose a Unix datagram socket over a POSIX message queue.

Answered by KRoy

Solution #5

Message queues are extremely handy for building decoupled local applications. They are very fast and preserve message boundaries (no need for the buffering and message chopping required with streaming sockets), and delivery takes only a couple of memcpy() operations: user code copies the block to the kernel, and the kernel copies the block to the other process reading from the queue. That is the whole message-delivery story.

Some well-known middlewares, such as Oracle Tuxedo and Mavimax Enduro/X, use these queues to help build load-balanced, high-performance, fault-tolerant distributed applications. The queues enable load balancing: when multiple executables read from the same queue, the kernel scheduler simply hands each message to whichever process is idle. An advantage on Linux is that the queues can also be polled.

Two processes, for example, can simply communicate locally across queues with fairly amazing throughput (70k req+rply/sec).

If networking is required, Enduro/X offers the tpbridge process, which reads messages from the local queue, sends the blocks to another machine, and injects them back into a local queue there.

Compared to sockets, queues have no busy/lingering-connection concerns when, for example, some binary has crashed: the application can simply start reading the queue again at startup and resume processing.

Answered by Madars Vi
