Stopping systemd service before stopping/unmounting s3fs FUSE connection
by dr-ing from LinuxQuestions.org on (#5KSS7)
Hello,
My application, which runs on a Debian 10 server, reads and writes data on an S3 object storage. Because some parts of the software aren't 100% S3-compatible yet, it doesn't use the S3 API directly but goes through an s3fs FUSE mount instead.
Since the server running the application is a stateless instance in a cloud environment, it may (and eventually will) terminate. I therefore need to make sure the application is shut down properly to avoid data corruption, e.g. from asynchronous jobs still in flight.
The problem is that during shutdown the s3fs FUSE mount is unmounted before the application is killed, so the application cannot finish its asynchronous file operations. I need to make sure the S3 mount is only unmounted after the application has exited.
The s3fs drive is mounted with the following shell command:
Code:
/usr/bin/s3fs bucket-name /mountpoint -o gid=33 -o uid=33 -o allow_other -o umask=0000 -o passwd_file=/tmp/.qbqWCbnHRBlAKxmiesZKlEpGEmerSUwC

After this, the following mount unit exists:

Code:
mountpoint.mount loaded active mounted /mountpoint

This is my service file with all the 'Wants' statements I already tried:
Code:
[Unit]
Description = My application
After=network.target
Wants=network.target
Wants=remote-fs.target
Wants=sys-fs-fuse-connections.mount
Wants=basic.target
Wants=mountpoint.mount
[Service]
ExecStart = /opt/application
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartForceExitStatus=SIGPIPE
KillMode=control-group
KillSignal=SIGTERM
TimeoutSec=900
[Install]
WantedBy = multi-user.target

I noticed that at the point where the application process exits, the NFS mounts are still active (presumably because of remote-fs.target?), but I did not achieve the same for the s3fs mount; it is already unmounted by then.
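From what I've read in the systemd man pages, Wants= only pulls a unit in but creates no ordering, and stop ordering is the reverse of start ordering, so I suspect I actually need After= (and perhaps BindsTo= or RequiresMountsFor=) against the mount unit. This is a sketch of what I would try next; I have not yet verified that it behaves correctly when mountpoint.mount is only created implicitly by the manual s3fs call:

Code:
[Unit]
Description = My application
After=network.target
Wants=network.target
# Ordering against the mount unit: at shutdown systemd stops units in the
# reverse of their start order, so this should keep /mountpoint mounted
# until the service has fully stopped.
After=mountpoint.mount
BindsTo=mountpoint.mount
# Alternatively, path-based (adds Requires= and After= on the mount unit):
# RequiresMountsFor=/mountpoint

(the [Service] and [Install] sections would stay as above)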
Does anyone have any tips on how I can solve this problem?
Thanks for every answer and best regards
(Some versions, in case they help:
OS: Debian 10.9
Kernel: 4.19.0-16-amd64
systemd: 241
cloud-init: 20.2
s3fs: 1.89)
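One more idea I'm considering, in case it changes the answer: instead of calling s3fs by hand, declaring the mount in /etc/fstab so that systemd generates mountpoint.mount itself and can order it properly. Roughly like this (options copied from my command above; I'm assuming _netdev is the right way to mark it as a network mount):

Code:
bucket-name /mountpoint fuse.s3fs _netdev,allow_other,umask=0000,uid=33,gid=33,passwd_file=/tmp/.qbqWCbnHRBlAKxmiesZKlEpGEmerSUwC 0 0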