Coder Perfect

In Linux, how do I keep a background process from being killed when I close the SSH client?


I’m using SSH (via PuTTY) to connect to a Linux system. I need to keep a process running overnight, so I thought I’d start it in the background (with an ampersand at the end of the command) and redirect its output to a file.

To my astonishment, that does not work: the process halts as soon as I close the PuTTY window.

What can I do to keep that from happening?

Asked by GetFree

Solution #1

Take a look at the nohup program.
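A minimal sketch of the nohup approach (sleep 30 stands in for the real overnight job, and job.log is a placeholder name):

```shell
# nohup makes the job immune to the SIGHUP sent when the terminal closes;
# the redirect captures output instead of the default nohup.out file.
nohup sleep 30 > job.log 2>&1 &
pid=$!

kill -0 "$pid" && echo "running as PID $pid"
kill "$pid"    # clean up the demo
rm -f job.log
```

After starting the job this way, the SSH session can be closed and the process keeps running.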

Answered by JesperE

Solution #2

I would advise you to use GNU Screen. It lets you disconnect from the server while keeping all of your processes running. I’m not sure how I survived without it before I discovered it.
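For reference, a typical screen workflow might look like this (the session name “night” and the sleep job are placeholders; the first line just bails out politely if screen is not installed):

```shell
command -v screen >/dev/null || { echo "screen not installed"; exit 0; }

# Start a detached session named "night" running the job.
screen -dmS night sleep 30

# Confirm the session exists; from a later SSH login, reattach with: screen -r night
screen -ls | grep -q night && echo "session is alive"

# From inside an attached session, Ctrl-a d detaches without killing anything.
# Tear down the demo session:
screen -S night -X quit
```

Everything started inside the session keeps running across SSH disconnects, which is what makes screen (or tmux) convenient for overnight jobs.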

Answered by gpojd

Solution #3

The process receives the SIGHUP signal when the session is closed, which it is apparently not catching. To avoid this, use the nohup command when starting the process, or the bash built-in disown -h after the process has started:

> help disown
disown: disown [-h] [-ar] [jobspec ...]
    By default, removes each JOBSPEC argument from the table of active jobs.
    If the -h option is given, the job is not removed from the table, but is
    marked so that SIGHUP is not sent to the job if the shell receives a
    SIGHUP.  The -a option, when JOBSPEC is not supplied, means to remove all
    jobs from the job table; the -r option means to remove only running jobs.
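A quick sketch of the disown -h route for a job that is already running (sleep stands in for the real process):

```shell
# Start the job normally, then mark the most recent job so the shell
# will not forward SIGHUP to it when the session closes.
sleep 30 &
pid=$!
disown -h

kill -0 "$pid" && echo "job $pid will survive the shell's SIGHUP"
kill "$pid"    # clean up the demo
```

Unlike nohup, this works after the fact, so it rescues a job you forgot to protect when you started it.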

Answered by Robert Gamble

Solution #4

daemonize? nohup? screen? (tmux ftw, screen is a piece of garbage 😉)

Simply double-fork, as every daemon has done since the beginning.

# ((exec sleep 30)&)
# grep PPid /proc/`pgrep sleep`/status
PPid:   1
# jobs
# disown
bash: disown: current: no such job

Bang! Finished :-) I’ve used it on a variety of programs and a variety of old machines. You can use redirects and other techniques to create a private channel between yourself and the process.

Create as:

run_in_coproc () {
    echo "coproc[$1] -> main"
    read -r; echo $REPLY
}

# dynamic-coprocess-generator. nice.
_coproc () {
    local i o e n=${1//[^A-Za-z0-9_]}; shift
    exec {i}<> <(:) {o}<> >(:) {e}<> >(:)
. /dev/stdin <<COPROC "${@}"
    (("\$@")&) <&$i >&$o 2>&$e
    $n=( $o $i $e )
COPROC
}

# pi-rads-of-awesome?
for x in {0..5}; do
    _coproc COPROC$x run_in_coproc $x
    declare -p COPROC$x
done

for x in COPROC{0..5}; do
. /dev/stdin <<RUN
    read -r -u \${$x[0]}; echo \$REPLY
    echo "$x <- main" >&\${$x[1]}
    read -r -u \${$x[0]}; echo \$REPLY
RUN
done
and then

# ./ 
declare -a COPROC0='([0]="21" [1]="16" [2]="23")'
declare -a COPROC1='([0]="24" [1]="19" [2]="26")'
declare -a COPROC2='([0]="27" [1]="22" [2]="29")'
declare -a COPROC3='([0]="30" [1]="25" [2]="32")'
declare -a COPROC4='([0]="33" [1]="28" [2]="35")'
declare -a COPROC5='([0]="36" [1]="31" [2]="38")'
coproc[0] -> main
COPROC0 <- main
coproc[1] -> main
COPROC1 <- main
coproc[2] -> main
COPROC2 <- main
coproc[3] -> main
COPROC3 <- main
coproc[4] -> main
COPROC4 <- main
coproc[5] -> main
COPROC5 <- main

And there you go: spawn whatever you like. The <(:) opens an anonymous pipe via process substitution; the : exits immediately, but the pipe sticks around because you hold a handle to it. I usually use sleep 1 instead of : because it’s slightly racy and I’d occasionally get a “file busy” error; that never happens if a real command is run (e.g., command true).
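The anonymous-pipe trick can be seen in isolation with a fixed descriptor number (a minimal sketch, bash-specific):

```shell
# Open an anonymous pipe read-write on fd 3. The ":" child exits at once,
# but the pipe itself survives because fd 3 still refers to it.
exec 3<> <(:)

echo "hello" >&3      # write into the pipe...
read -r line <&3      # ...and read it back from the same fd
echo "$line"
exec 3>&-             # close the descriptor
```

This is the same mechanism the generator above uses with dynamically allocated descriptors ({i}, {o}, {e}).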

“heredoc sourcing”:

. /dev/stdin <<EOF

This works in every shell I’ve tried, including busybox and others (initramfs). I’d never seen it done before; I discovered it on my own while poking around. Who knew source could accept args? It often acts as a much more manageable form of eval, if such a thing exists.

Answered by anthonyrisinger

Solution #5

nohup blah &

Replace blah with the name of your process.

Answered by Brian Knoblauch
