Problem
I’ve recently heard a few people say that on Linux it’s almost always better to use processes instead of threads, since Linux handles processes so efficiently and threads come with so many problems (such as locking). However, I’m skeptical, because threads seem to give a significant speed boost in some situations.
So, when faced with a situation that both threads and processes could handle reasonably well, should I use threads or processes? For example, if I were writing a web server, should I use processes, threads, or a combination?
Asked by user17918
Solution #1
Linux uses a one-to-one threading model, with (to the kernel) no distinction between processes and threads — everything is simply a runnable task. *
On Linux, the clone(2) system call creates a new task with a configurable level of sharing, controlled by flags such as CLONE_VM (share the address space), CLONE_FILES (share the file descriptor table), and CLONE_SIGHAND (share the signal handler table).
fork() is equivalent to clone(least sharing) and pthread_create() to clone(most sharing). **
forking costs a tiny bit more than pthread_create() because of copying tables and creating COW mappings for memory, but the Linux kernel developers have tried (and succeeded) at keeping those costs minimal.
Switching between tasks that share the same memory space and tables is a tiny bit cheaper than switching between tasks that don’t, because the data may already be loaded in cache. However, switching tasks is still very fast even when nothing is shared — this is something else that Linux kernel developers try hard to ensure (and succeed at ensuring).
In fact, on a multi-processor system, not sharing may actually help performance: synchronising shared memory is expensive when each task is running on a different processor.
* Simplified. CLONE_THREAD causes signal delivery to be shared (which requires CLONE_SIGHAND, which shares the signal handler table).
** Simplified. Both the SYS_fork and SYS_clone syscalls exist, but they are both very thin wrappers around the same do_fork function, which is itself a thin wrapper around copy_process. Yes, the terms process, thread, and task are used rather interchangeably in the Linux kernel…
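The difference in sharing described above can be observed from user space. Here is a minimal Python sketch (Python’s os.fork and threading are thin wrappers over fork(2) and the pthread library, which in turn use clone(2) on Linux); the function names are my own, purely for illustration:

```python
import os
import threading

def thread_shares_memory():
    """A thread writes to a list; the parent sees the change (shared address space)."""
    data = [0]
    t = threading.Thread(target=lambda: data.__setitem__(0, 42))
    t.start()
    t.join()
    return data[0]          # 42: thread and parent share memory

def fork_copies_memory():
    """A forked child writes to its copy-on-write copy; the parent's copy is untouched."""
    data = [0]
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:            # child: its own (copy-on-write) address space
        data[0] = 42
        os.write(w, b"done")
        os._exit(0)
    os.read(r, 4)           # wait until the child has written
    os.waitpid(pid, 0)
    os.close(r)
    os.close(w)
    return data[0]          # still 0: the child only changed its own copy
```

Running both on Linux shows the thread’s write visible to the parent while the forked child’s write is not.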
Answered by ephemient
Solution #2
Linux (and indeed Unix) gives you a third option.
Option 1 — processes: Build a standalone executable that handles part (or all) of your application, and invoke it separately for each process, e.g. the program launches copies of itself to delegate work to.
Option 2 — threads: Build a standalone executable that starts up with a single thread, and create additional threads to perform certain tasks.
Option 3 — fork: Available only under Linux/Unix, this one is a bit different. A forked process really is its own process with its own address space; unlike a thread, the child cannot (normally) affect its parent’s or siblings’ address space, which adds robustness.
However, the memory pages are not copied eagerly; they are copy-on-write, so less memory is used than you might imagine.
Consider a web server program with two steps:

1. Read configuration and runtime data
2. Serve page requests
If you used threads, step 1 would be done once and step 2 would be done in multiple threads. If you used “traditional” processes, steps 1 and 2 would have to be repeated for each process, and the memory holding the configuration and runtime data would be duplicated. If you used fork(), you could do step 1 once and then fork(), leaving the runtime data and configuration in memory, untouched and not copied.
So there are really three choices.
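The fork()-based pattern above (load once, then fork workers) can be sketched in a few lines of Python. This is only an illustration of the shape of a pre-fork server; the names (serve, prefork) and the trivial config dict are hypothetical stand-ins, not a real server:

```python
import os

def serve(config, worker_id):
    # Placeholder for "step 2": handle requests using the inherited config.
    return f"worker {worker_id} using {config['name']}"

def prefork(n_workers):
    config = {"name": "example.conf"}   # "step 1": done once, before forking
    pids = []
    for i in range(n_workers):
        pid = os.fork()
        if pid == 0:
            serve(config, i)            # child sees config via COW pages, no copy made
            os._exit(0)
        pids.append(pid)
    for pid in pids:                    # parent reaps all workers
        os.waitpid(pid, 0)
    return len(pids)
```

Each worker inherits the already-loaded configuration through copy-on-write pages, which is exactly why step 1 does not need to be repeated per process.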
Answered by MarkR
Solution #3
This depends on a lot of factors. Processes are more heavyweight than threads, so they have a higher startup and shutdown cost. Interprocess communication (IPC) is also harder and slower than interthread communication.
Conversely, processes are safer and more secure than threads, because each one runs in its own virtual address space. If one process crashes or has a buffer overrun, it does not affect any other process at all; whereas if a thread crashes, it takes down all of the other threads in the process, and if a thread has a buffer overrun, it opens up a security hole in all of the threads.
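The crash-isolation point is easy to demonstrate: a child process that dies from a segmentation fault leaves its parent untouched. A minimal Python sketch (the function name is mine; the SIGSEGV is delivered deliberately rather than via an actual bad pointer):

```python
import os
import signal

def child_crash_is_isolated():
    pid = os.fork()
    if pid == 0:
        # Simulate a crash in the child by delivering SIGSEGV to itself.
        os.kill(os.getpid(), signal.SIGSEGV)
        os._exit(1)                      # never reached
    _, status = os.waitpid(pid, 0)
    # The parent keeps running; it merely observes that the child was killed.
    return os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGSEGV
```

Had the "crash" happened in one thread of a multithreaded process instead, the whole process (every thread) would have died with it.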
So, if your application’s modules can run mostly independently with little communication, you should probably use processes if you can afford the startup and shutdown costs. The performance hit of IPC will be minimal, and you’ll be slightly safer against bugs and security holes. If you need every bit of performance you can get, or you have a lot of shared data (such as complex data structures), go with threads.
Answered by Adam Rosenfield
Solution #4
Others have spoken about the issues.
Perhaps the most significant difference is that in Windows processes are heavy and expensive compared to threads, while in Linux the difference is much smaller, so the equation balances at a different point.
Answered by dmckee — ex-moderator kitten
Solution #5
Once upon a time there was Unix, and in that good old Unix there was a lot of overhead for processes. So some clever people created threads, which share the same address space as the parent process and need only a reduced context switch, making context switching more efficient.
In modern Linux (2.6.x) there isn’t much of a performance difference between a process’s context switch and a thread’s (only the MMU work — switching the address space and flushing the TLB — is additional for the process switch). The downside of the shared address space is that a faulty pointer in one thread can corrupt memory of the parent process or of another thread in the same address space.
A process is protected by the MMU, so a faulty pointer will just cause a SIGSEGV (signal 11) and no corruption outside that process.
In general, I’d choose processes (not much context switching overhead in Linux, but memory protection thanks to MMU), but pthreads if I needed a real-time scheduler class, which is a very different kettle of fish.
Why do you think threads provide such a significant performance boost on Linux? Do you have any data for this, or is it just a myth?
Answered by robert.berger
Post is based on https://stackoverflow.com/questions/807506/threads-vs-processes-in-linux