
Inadequate Propagation of SIGTERM and SIGINT to Task Subprocesses #1408

Open
cake-monotone opened this issue Nov 23, 2023 · 12 comments
Labels: state: needs triage (Waiting to be triaged by a maintainer.)

@cake-monotone

Script: signal-test.sh

#!/bin/sh

# Set up signal traps
trap 'echo "Received SIGINT"' SIGINT
trap 'echo "Received SIGTERM"' SIGTERM

# Infinite loop to keep the script running
while true; do
    sleep 1
done

Task Configuration: Taskfile.yml

version: "3"

tasks:
  signal-test:
    cmds:
      - sh ./signal-test.sh

Given the process structure:

# ps output for reference
PID   PPID  CMD
2       1    task signal-test
3       2    sh ./signal-test.sh

When executing kill -INT 2, the following output is observed:

kill -INT 2
# task: Signal received: "interrupt"
# However, the signal is not propagated to `signal-test.sh`

The critical point is that the SIGINT never reaches the subprocess. This is particularly problematic when using Task inside a Docker container: docker stop sends SIGTERM, and CTRL-C during docker run sends SIGINT, directly to the Task process. Because the signal is not propagated to the child, the container cannot shut down gracefully.

  • Task version: v3.31.0 (h1:o6iyj9gPJXxvxPi/u/l8e025PmM2BqKgtLNPS2i7hV4=)
  • Operating system: macOS
  • Experiments enabled:
@task-bot added the state: needs triage (Waiting to be triaged by a maintainer.) label Nov 23, 2023
@cake-monotone
Author

How about this flow?

  1. Propagate SIGINT or SIGTERM directly to the currently running command (see the sketch below).
  2. Allow that command to finish on its own.
  3. Once the command finishes, Task stops without triggering any further commands.

What do you think of this idea?
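
A minimal Go sketch of what step 1 could look like, outside of Task itself (a hypothetical illustration using os/exec, not Task's internals):

// Hypothetical sketch: forward SIGINT/SIGTERM to the child and then just
// wait for it, instead of consuming the signal. Not Task's actual code.
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	cmd := exec.Command("sh", "./signal-test.sh")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, os.Interrupt, syscall.SIGTERM)
	go func() {
		s := <-sigs
		_ = cmd.Process.Signal(s) // propagate to the running command
	}()

	_ = cmd.Wait() // let the command finish; run nothing further afterwards
}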

@Elijas

Elijas commented Jan 14, 2024

+1 Same here - the described problem prevents Docker from achieving a 'graceful exit'.
Any solutions?

@lookitsatravis

I'm also experiencing this issue with Docker. My workaround is to start the container with -d, and I have a stop task that the start task runs first to stop any existing container. It works, but it would be better if signals propagated.

Here's what I do for anyone who is dealing with this for Docker:

# https://taskfile.dev

version: "3"

vars:
  IMAGE_NAME: my-app
  IMAGE_TAG: latest
  IMAGE: "{{.IMAGE_NAME}}:{{.IMAGE_TAG}}"

tasks:
  default:
    desc: Shows this help message
    cmds:
      - task --list-all

  build:
    desc: Build the app
    cmds:
      - docker build . -t {{.IMAGE}}

  start:
    desc: Start the app
    deps:
      - build
    cmds:
      - cmd: task stop
        ignore_error: true
      - docker run -d --rm -p 8080:8080 --name my-app {{.IMAGE}}
      - cmd: docker logs --follow my-app
        ignore_error: true

  stop:
    desc: Stop the app
    cmds:
      - docker stop my-app

@trulede

trulede commented May 31, 2024

The signal handling is consuming these signals. You could try an experiment: comment out this line and see if it works as you expect.

Or, as an alternative, you could send a different signal and see if that gets propagated. For instance, SIGHUP:

docker kill --signal=SIGHUP task_container

and then you need a program that catches that signal; the basic idea is shown at https://gobyexample.com/signals.
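
A minimal sketch of such a program, along the lines of the linked example (this assumes your workload is a Go program that handles SIGHUP itself):

// Minimal SIGHUP catcher, following the gobyexample pattern.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGHUP)

	fmt.Println("waiting for SIGHUP")
	s := <-sigs
	fmt.Println("received:", s)
	// graceful shutdown goes here...
}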

The actual process that Task runs seems to come into existence here, and AFAICS nothing would prevent a signal from propagating through, other than that the signals are currently consumed by Task's own signal handling.

@vendelin8

I tried it without the signal.Notify(ch, ... line; it didn't work.
Host: Arch linux x64
Image based on Alpine/Debian

@trulede

trulede commented Nov 15, 2024

Does it (Task) forward any signals to your test script? I wonder if any signals are propagated to the running shell.

Possibly it's achieved by cancelling the context of the shell:
https://github.com/mvdan/sh/blob/5919e5ab226bf67b7a98b4925012cbd0a14d1096/interp/handler.go#L86

But it would take a bit more searching to figure out whether the signals are being propagated at all, and if not, whether that effect could be achieved.
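
For context, cancelling a context around a child process is not the same as forwarding the signal. An illustrative sketch with plain os/exec (not mvdan/sh or Task code), reusing the signal-test.sh script from above:

// Illustrative only: on SIGINT/SIGTERM the context is cancelled and the
// child is killed (SIGKILL by default), which a shell trap cannot catch.
// That is different from propagating SIGINT/SIGTERM to the child.
package main

import (
	"context"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	cmd := exec.CommandContext(ctx, "sh", "./signal-test.sh")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run()
}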

@vendelin8

I think I found it out... There's a third signal that should be added to Task's signal.Notify: syscall.SIGURG. I was able to overcome the issue with local code like:

// Assumes the standard imports: "log", "os", "os/signal", "syscall".
signalC := make(chan os.Signal, 1)
signal.Notify(signalC, os.Interrupt, syscall.SIGTERM, syscall.SIGURG)
go func() {
	var s os.Signal
	for {
		s = <-signalC
		// The Go runtime uses SIGURG internally (preemption); skip it.
		if s != syscall.SIGURG {
			break
		}
	}
	log.Println("signal received", s)
	// graceful shutdown...
}()

@trulede

trulede commented Nov 17, 2024

Did you change anything in Task?

As I read through the code, it seems like InterceptInterruptSignals() is delaying Task from shutting down (by consuming the signal delivered to Task a few times before exiting). So did you implement that part differently?
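
Roughly, the behaviour being described looks like the following (an illustrative sketch only, not the actual InterceptInterruptSignals code):

// Illustrative sketch: the first interrupts are swallowed and only a later
// one forces an exit, so the running subprocess never receives the signal.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, os.Interrupt, syscall.SIGTERM)
	go func() {
		for i := 1; i <= 3; i++ {
			s := <-sigs // consumed here; nothing is forwarded to the child
			fmt.Printf("signal received (%d/3): %v\n", i, s)
		}
		os.Exit(1)
	}()
	time.Sleep(time.Hour) // stand-in for the running task
}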

@vendelin8

#1924

@vendelin8

My PR didn't seem to work, but I found out about https://github.com/krallin/tini. It propagates signals to all subprocesses with the -g flag, e.g.:

FROM golang:alpine
RUN apk add --no-cache go-task tini
...
ENTRYPOINT ["/sbin/tini", "-g", "--"]

@trulede

trulede commented Nov 21, 2024

Additional explanation here: https://www.kaggle.com/code/residentmario/best-practices-for-propagating-signals-on-docker

The Docker documentation offers specific details and examples: https://docs.docker.com/reference/dockerfile/#entrypoint
