Create a shell script for a cron job with hidden debug information that will be shown only when it is executed inside a terminal.
The idea behind this is quite simple: we need to verify whether stdin (file descriptor 0), stdout (file descriptor 1) or stderr (file descriptor 2) is associated with a terminal. If not, the shell script is being executed as a cron job, so any debug messages can be omitted.
#!/bin/bash
# cron job template with debug information
# display debug information
# only if $terminal variable is set to 1
emit() {
if [ "$terminal" -eq "1" ]; then
printf "%s\n""$*"
fi
}
# determine if standard input (file descriptor) is opened on a terminal
# and set $terminal variable accordingly
if [ -t 0 ] ; then
terminal=1
else
terminal=0
fi
emit "Script name is $(basename $0)"
emit "Script location is $(readlink -f $0)"
emit "Script is executed on $TERM"
emit "Script is executed from $PWD directory"
emit "Script is executed with $USER permissions"
if [ -n "$SUDO_USER" ]; then
emit "Script is executed using sudo by $SUDO_USER ($SUDO_UID:$SUDO_GID)"
fi
emit
emit "Script is using path defined as $PATH"
emit "Script LANG variable is $(locale | awk -F '=''/LANG=/ {print $2}')"
emit
emit "Execution start: $(date)"
# perform real operations here
emit "Execution end: $(date)"
Sample output inside a terminal.
~# bash bin/cron_job_template.sh
Script name is cron_job_template.sh
Script location is /root/bin/cron_job_template.sh
Script is executed on screen
Script is executed from /root directory
Script is executed with root permissions
Script is using path defined as /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Script LANG variable is C.UTF-8
Execution start: Tue Dec 26 13:25:50 UTC 2017
Execution end: Tue Dec 26 13:25:50 UTC 2017
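The terminal detection can be exercised without waiting for cron; redirecting standard input away from the terminal reproduces the cron environment. A minimal sketch of the same [ -t 0 ] test used above:

```shell
# Report whether standard input is attached to a terminal,
# using the same test as the cron job template above.
check() {
  if [ -t 0 ]; then
    echo "interactive"
  else
    echo "non-interactive"
  fi
}

# Simulate the cron environment by redirecting stdin from /dev/null.
check < /dev/null   # prints "non-interactive"
```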
Compute the SHA message digest of a file to verify that its contents have not been altered.
You have two possible ways to perform this operation: the shasum Perl script provided by the perl package, or the sha1sum/sha224sum/sha256sum/sha384sum/sha512sum commands provided by the coreutils package.
coreutils utilities
The coreutils package provides multiple commands to compute and verify file checksums, depending on the algorithm used: sha1sum for SHA-1, sha224sum for SHA-224, sha256sum for SHA-256, sha384sum for SHA-384 and sha512sum for SHA-512.
Compute checksum for debian-stretch.tar file using SHA-256 algorithm.
$ sha256sum debian-stretch.tar | tee debian-stretch.tar.sha256sum
cc59e9182464ffe338a0addd5ae2ffaa062047a87f3c78b4eca8c590fd60c67e debian-stretch.tar
Verify checksum for file or files defined in debian-stretch.tar.sha256sum file.
$ sha256sum --check debian-stretch.tar.sha256sum
debian-stretch.tar: OK
Verify the checksum for file or files defined in the debian-stretch.tar.sha256sum file using the exit code.
A failed checksum can be easily spotted and identified.
$ sha256sum --check debian-stretch.tar.sha256sum
debian-stretch.tar: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
$ echo $?
1
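The exit-code behaviour can be reproduced with a scratch file; adding --status suppresses the per-file messages entirely, leaving only the exit code (file names below are examples):

```shell
# Record a checksum, then tamper with the file to trigger a failure.
tmp=$(mktemp)
echo "hello" > "$tmp"
sha256sum "$tmp" > "$tmp.sha256sum"

# Intact file: exit code 0, no output thanks to --status.
sha256sum --check --status "$tmp.sha256sum" && echo "intact"

# Tampered file: non-zero exit code.
echo "tampered" >> "$tmp"
sha256sum --check --status "$tmp.sha256sum" || echo "modified"

rm -f "$tmp" "$tmp.sha256sum"
```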
perl utilities
The perl package provides the shasum command to compute and verify file checksums. You can use it as a replacement for the coreutils alternatives as it mimics the behavior of the equivalent GNU utilities.
Compute checksum for debian-stretch.tar file using SHA-256 algorithm.
$ shasum --algorithm 256 debian-stretch.tar | tee debian-stretch.tar.sha256sum
cc59e9182464ffe338a0addd5ae2ffaa062047a87f3c78b4eca8c590fd60c67e debian-stretch.tar
Verify checksum for file or files defined in debian-stretch.tar.sha256sum file.
$ shasum --algorithm 256 --check debian-stretch.tar.sha256sum
debian-stretch.tar: OK
You do not need to specify the algorithm when using the common ones, but this does not apply to the SHA-512/224 and SHA-512/256 algorithms.
$ shasum --check debian-stretch.tar.sha256sum
debian-stretch.tar: OK
Verify the checksum for the debian-stretch.tar file using the exit code.
A failed checksum can be easily spotted and identified.
$ shasum --algorithm 256 --check debian-stretch.tar.sha256sum
debian-stretch.tar: FAILED
shasum: WARNING: 1 computed checksum did NOT match
$ echo $?
1
You can use the default SHA-1 algorithm to verify file integrity or SHA-2 family algorithms like SHA-224, SHA-256, SHA-384 and SHA-512. Additionally you can take advantage of the most recent members of the SHA-2 family, SHA-512/224 and SHA-512/256.
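For the SHA-512/224 and SHA-512/256 variants the algorithm flag is mandatory on both sides, since shasum cannot infer these from the digest length alone. A short sketch with a scratch file:

```shell
# SHA-512/256 digests are 64 hex characters, the same length as SHA-256,
# so the algorithm must be given explicitly when verifying as well.
tmp=$(mktemp)
echo "data" > "$tmp"
shasum --algorithm 512256 "$tmp" > "$tmp.sha512256"
shasum --algorithm 512256 --check "$tmp.sha512256"
rm -f "$tmp" "$tmp.sha512256"
```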
gpg: assuming signed data in 'ejabberd_18.01-0_amd64.deb'
gpg: Signature made Fri Jan 12 09:15:15 2018 UTC
gpg: using DSA key 8ECA469419C09311
gpg: checking the trustdb
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Good signature from "Process-one <contact@process-one.net>" [ultimate]
You will see the following message in those rare cases when the downloaded file is corrupted.
gpg: assuming signed data in 'ejabberd_18.01-0_amd64.deb'
gpg: Signature made Fri Jan 12 09:15:15 2018 UTC
gpg: using DSA key 8ECA469419C09311
gpg: BAD signature from "Process-one <contact@process-one.net>" [ultimate]
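The Good/BAD distinction above can be reproduced end to end with a throwaway key. This is a sketch assuming GnuPG 2.1 or later; the key user id and file names are illustrative:

```shell
# Create a disposable keyring and an unprotected signing key.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
cd "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key demo@example.org default default never

# Sign a file with a detached signature, then verify it.
echo "payload" > package.deb
gpg --batch --detach-sign package.deb
gpg --verify package.deb.sig package.deb         # Good signature

# Any modification of the signed data turns it into a BAD signature.
echo "tampered" >> package.deb
gpg --verify package.deb.sig package.deb || echo "verification failed"
```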
Calculate how fast data is copied to the specified directory to determine how long it would take to finish the whole process.
Create a shell script that will calculate how much data was copied to the specified directory in one minute and use that value to create a simple prognosis for the whole hour and day.
#!/bin/bash
# calculate how fast data is copied to specified directory
# base 1000 or 1024
base=1024
if [ "$base" -eq "1000" ]; then
block_size=1M
else
block_size=1MB
fi
# usage
usage() {
echo "Usage:"
echo " $0 directory"
echo ""
}
# pretty print total amount of data using the same unit as du
pretty_print() {
amount_of_data="$1"
if [ "$base" -eq "1000" ]; then
unit="M"
else
unit="MB"
fi
if [ "$amount_of_data" -gt "1024" ]; then
amount_of_data=$(expr $amount_of_data / 1024) # gigabytes
if [ "$base" -eq "1000" ]; then
unit="G"
else
unit="GB"
fi
fi
if [ "$amount_of_data" -gt "1024" ]; then
amount_of_data=$(expr $amount_of_data / 1024) # terabytes
if [ "$base" -eq "1000" ]; then
unit="T"
else
unit="TB"
fi
fi
# print output using the same units as du
printf "%4s %2s\r" ${amount_of_data} ${unit}
}
if [ "$#" -eq "1" ] && [ -d "$1" ]; then
directory="$1"
# du params:
# use defined block size (1M/1MB),
# use apparent size,
# display only total summary
disk_usage_params="--block-size=${block_size} --apparent-size --summarize"
difference_per_minute=$( (du $disk_usage_params $directory; \
sleep 1m; \
du $disk_usage_params $directory;) 2>/dev/null | \
cut -f 1 | tac | paste --serial --delimiter - | bc)
if [ "$difference_per_minute" -gt "0" ]; then
difference_per_hour=$(expr $difference_per_minute \* 60)
difference_per_day=$(expr $difference_per_hour \* 24)
current_directory_size=$(du $disk_usage_params $directory)
echo "Calculated amount data copied per minute is $(pretty_print $difference_per_minute)"
echo "---------------------------------------------------"
echo "Prognosis for a whole hour is $(pretty_print $difference_per_hour)"
echo "Prognosis for a whole day is $(pretty_print $difference_per_day)"
echo "---------------------------------------------------"
echo "Current directory size is $(pretty_print $current_directory_size)"
else
echo "The amount of data did not changed during one minute"
fi
else
usage
fi
Sample usage.
$ check.sh
Usage:
check.sh directory
Sample scenario when data is not copied to the specified directory.
$ check.sh /var/backups/
The amount of data did not change during one minute
Sample scenario when data is copied to the specified directory over network.
$ check.sh /srv/backup
Calculated amount of data copied per minute is 243 MB
---------------------------------------------------
Prognosis for a whole hour is 14 GB
Prognosis for a whole day is 341 GB
---------------------------------------------------
Current directory size is 241 GB
Hint about the most interesting part of this shell script:
cut (get the first column), tac (reverse the lines), paste (create a subtraction equation) and bc (calculate the equation) can be replaced by a single awk command, but I like it more this way.
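For the curious, the equivalent single awk command, reading the two du results from standard input, would look like this (the sample sizes and path are illustrative):

```shell
# First line is the old size, second line is the new size;
# print the difference, replacing cut | tac | paste | bc.
printf '100\t/srv/backup\n140\t/srv/backup\n' | \
  awk 'NR == 1 {old = $1} NR == 2 {print $1 - old}'
# prints 40
```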
Create an iptables firewall that will allow already established connections, incoming icmp and ssh, and outgoing icmp, ntp, dns, ssh, http and https. It will also log invalid and dropped packets.
# Flush rules and delete custom chains
iptables -F
iptables -X
# Define chain to allow particular source addresses
iptables -N chain-incoming-ssh
iptables -A chain-incoming-ssh -s 192.168.1.149 -j ACCEPT -m comment --comment "local access"
iptables -A chain-incoming-ssh -j NFLOG --nflog-prefix "[fw-inc-ssh]:" --nflog-group 12
iptables -A chain-incoming-ssh -j DROP
# Define chain to log and drop incoming packets
iptables -N chain-incoming-log-and-drop
iptables -A chain-incoming-log-and-drop -j NFLOG --nflog-prefix "[fw-inc-drop]:" --nflog-group 11
iptables -A chain-incoming-log-and-drop -j DROP
# Define chain to log and drop outgoing packets
iptables -N chain-outgoing-log-and-drop
iptables -A chain-outgoing-log-and-drop -j NFLOG --nflog-prefix "[fw-out-drop]:" --nflog-group 11
iptables -A chain-outgoing-log-and-drop -j DROP
# Drop invalid packets
iptables -A INPUT -m conntrack --ctstate INVALID -j chain-incoming-log-and-drop
# Accept everything on loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# ACCEPT incoming packets for established connections
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Accept incoming ICMP
iptables -A INPUT -p icmp -j ACCEPT
# Accept incoming SSH
iptables -A INPUT -p tcp --dport 22 -j chain-incoming-ssh
# Accept outgoing packets for established connections
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Accept outgoing DNS
iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
# Accept outgoing NTP
iptables -A OUTPUT -p tcp --dport 123 -j ACCEPT
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
# Accept outgoing HTTP/S
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
# Accept outgoing SSH
iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT
# Accept outgoing ICMP
iptables -A OUTPUT -p icmp -j ACCEPT
# Log not accounted outgoing traffic
iptables -A OUTPUT -j chain-outgoing-log-and-drop
# Log not accounted forwarding traffic
iptables -A FORWARD -j chain-incoming-log-and-drop
# Drop everything else
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
List all firewall rules to verify that the executed commands were applied as desired.
$ sudo iptables -L -v -n
Chain INPUT (policy DROP 136 packets, 7799 bytes)
pkts bytes target prot opt in out source destination
0 0 chain-incoming-log-and-drop all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
3922 1478K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
14 896 chain-incoming-ssh tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 chain-incoming-log-and-drop all -- * * 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0
2811 370K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
18 1106 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:123
1 76 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:123
3 180 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
10 936 chain-outgoing-log-and-drop all -- * * 0.0.0.0/0 0.0.0.0/0
Chain chain-incoming-log-and-drop (2 references)
pkts bytes target prot opt in out source destination
0 0 NFLOG all -- * * 0.0.0.0/0 0.0.0.0/0 nflog-prefix "[fw-inc-drop]:" nflog-group 11
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Chain chain-incoming-ssh (1 references)
pkts bytes target prot opt in out source destination
2 176 ACCEPT all -- * * 192.168.1.149 0.0.0.0/0 /* local access */
12 720 NFLOG all -- * * 0.0.0.0/0 0.0.0.0/0 nflog-prefix "[fw-inc-ssh]:" nflog-group 12
12 720 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Chain chain-outgoing-log-and-drop (1 references)
pkts bytes target prot opt in out source destination
10 936 NFLOG all -- * * 0.0.0.0/0 0.0.0.0/0 nflog-prefix "[fw-out-drop]:" nflog-group 11
10 936 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
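These rules live only in the kernel and are lost on reboot. One way to persist them (a sketch assuming the iptables-persistent package and its /etc/iptables/rules.v4 convention, which are not part of the setup above) is:

```shell
# Dump the current ruleset in a format that iptables-restore understands.
sudo iptables-save | sudo tee /etc/iptables/rules.v4

# Restore it manually, or let the netfilter-persistent service do it at boot.
sudo iptables-restore < /etc/iptables/rules.v4
```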
Find symbolic links inside the /etc/alternatives directory whose target contains the case-sensitive vim keyword but not the /usr/share/man/ path.
milosz@milosz-XPS-13-9343:~/name$ find /etc/alternatives/ -type l -lname "*vim*" -and -not -lname "/usr/share/man/*" -exec stat --format="%N" {} \;
Install the dirmngr package, which is required to fetch keys from a keyserver.
$ sudo apt-get install dirmngr
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
dbus-user-session pinentry-gnome3 tor
The following NEW packages will be installed:
dirmngr
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 595 kB of archives.
After this operation, 1,110 kB of additional disk space will be used.
Get:1 http://ftp.task.gda.pl/debian stretch/main amd64 dirmngr amd64 2.1.18-8~deb9u1 [595 kB]
Fetched 595 kB in 0s (1,882 kB/s)
Selecting previously unselected package dirmngr.
(Reading database ... 26571 files and directories currently installed.)
Preparing to unpack .../dirmngr_2.1.18-8~deb9u1_amd64.deb ...
Unpacking dirmngr (2.1.18-8~deb9u1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up dirmngr (2.1.18-8~deb9u1) ...
Install the apt-transport-https package to allow downloading packages over HTTPS.
$ sudo apt-get install apt-transport-https
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
apt-transport-https
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 171 kB of archives.
After this operation, 243 kB of additional disk space will be used.
Get:1 http://ftp.task.gda.pl/debian stretch/main amd64 apt-transport-https amd64 1.4.8 [171 kB]
Fetched 171 kB in 0s (831 kB/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 26565 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_1.4.8_amd64.deb ...
Unpacking apt-transport-https (1.4.8) ...
Setting up apt-transport-https (1.4.8) ...
Add supplementary repository
I will add the RabbitMQ repository as it is a great example of the most common usage scenario.
$ echo "deb https://dl.bintray.com/rabbitmq/debian stretch main" | sudo tee /etc/apt/sources.list.d/bintray.rabbitmq.list
Update package index to notice the missing public key.
$ sudo apt-get update
Hit:1 http://security.debian.org/debian-security stretch/updates InRelease
Ign:2 http://ftp.task.gda.pl/debian stretch InRelease
Hit:3 http://ftp.task.gda.pl/debian stretch-updates InRelease
Hit:4 http://ftp.task.gda.pl/debian stretch Release
Ign:5 https://dl.bintray.com/rabbitmq/debian stretch InRelease
Get:7 https://dl.bintray.com/rabbitmq/debian stretch Release [54.1 kB]
Get:8 https://dl.bintray.com/rabbitmq/debian stretch Release.gpg [821 B]
Ign:8 https://dl.bintray.com/rabbitmq/debian stretch Release.gpg
Get:9 https://dl.bintray.com/rabbitmq/debian stretch/main amd64 Packages [867 B]
Fetched 55.8 kB in 2s (23.4 kB/s)
Reading package lists... Done
W: GPG error: https://dl.bintray.com/rabbitmq/debian stretch Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6B73A36E6026DFCA
W: The repository 'https://dl.bintray.com/rabbitmq/debian stretch Release' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Release signature cannot be verified due to missing 6B73A36E6026DFCA public key.
This method will use the /etc/apt/trusted.gpg public keyring file. Therefore I will call it the old way. There is nothing wrong with that method; it just uses a single file to store trusted public keys instead of multiple ones.
If you already know the URL address of the required key then use wget or curl to download and import the public key.
gpg: /home/milosz/.gnupg/trustdb.gpg: trustdb created
gpg: key 6B73A36E6026DFCA: public key "RabbitMQ Release Signing Key <info@rabbitmq.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1
This method will use the /etc/apt/trusted.gpg.d/ directory to store GPG public keyring files. Therefore I will call it the modern way as it is easier to inspect and organize a bunch of independent files. It has been available since the beginning of 2017, if my memory serves me correctly (see the "add TrustedParts so in the future new keyrings can just be dropped" commit).
If you already know the URL address of the required public key then use wget or curl to download and import it. Remember to update the file permissions from 600 to 644.
gpg: keyring '/etc/apt/trusted.gpg.d/rabbit.gpg' created
gpg: key 6B73A36E6026DFCA: public key "RabbitMQ Release Signing Key <info@rabbitmq.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1
The key in the keyring is ignored as the file is not readable by user _apt
Update GPG key public ring file permissions to 644 if you see the following error.
$ apt-get update
Ign:1 http://ftp.task.gda.pl/debian stretch InRelease
Hit:2 http://ftp.task.gda.pl/debian stretch-updates InRelease
Hit:3 http://security.debian.org/debian-security stretch/updates InRelease
Hit:4 http://ftp.task.gda.pl/debian stretch Release
Ign:5 https://dl.bintray.com/rabbitmq/debian stretch InRelease
Get:6 https://dl.bintray.com/rabbitmq/debian stretch Release [54.1 kB]
Get:7 https://dl.bintray.com/rabbitmq/debian stretch Release.gpg [821 B]
Ign:7 https://dl.bintray.com/rabbitmq/debian stretch Release.gpg
Hit:9 https://dl.bintray.com/rabbitmq/debian stretch/main amd64 Packages
Ign:9 https://dl.bintray.com/rabbitmq/debian stretch/main amd64 Packages
Get:9 https://dl.bintray.com/rabbitmq/debian stretch/main amd64 Packages [709 B]
Hit:9 https://dl.bintray.com/rabbitmq/debian stretch/main amd64 Packages
Fetched 54.9 kB in 1s (48.8 kB/s)
Reading package lists... Done
W: http://ftp.task.gda.pl/debian/dists/stretch-updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/rabbit.gpg are ignored as the file is not readable by user '_apt' executing apt-key.
W: http://security.debian.org/debian-security/dists/stretch/updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/rabbit.gpg are ignored as the file is not readable by user '_apt' executing apt-key.
W: http://ftp.task.gda.pl/debian/dists/stretch/Release.gpg: The key(s) in the keyring /etc/apt/trusted.gpg.d/rabbit.gpg are ignored as the file is not readable by user '_apt' executing apt-key.
W: https://dl.bintray.com/rabbitmq/debian/dists/stretch/Release.gpg: The key(s) in the keyring /etc/apt/trusted.gpg.d/rabbit.gpg are ignored as the file is not readable by user '_apt' executing apt-key.
W: GPG error: https://dl.bintray.com/rabbitmq/debian stretch Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6B73A36E6026DFCA
W: The repository 'https://dl.bintray.com/rabbitmq/debian stretch Release' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
The key in the keyring is not recognized
You need to use a GPG public keyring file, not a GPG keybox database. That is why I prefix the keyring with the gnupg-ring: scheme.
$ apt-get update
Ign:1 http://ftp.task.gda.pl/debian stretch InRelease
Hit:2 http://ftp.task.gda.pl/debian stretch-updates InRelease
Hit:3 http://security.debian.org/debian-security stretch/updates InRelease
Hit:4 http://ftp.task.gda.pl/debian stretch Release
Ign:5 https://dl.bintray.com/rabbitmq/debian stretch InRelease
Get:7 https://dl.bintray.com/rabbitmq/debian stretch Release [54.1 kB]
Get:8 https://dl.bintray.com/rabbitmq/debian stretch Release.gpg [821 B]
Ign:8 https://dl.bintray.com/rabbitmq/debian stretch Release.gpg
Get:9 https://dl.bintray.com/rabbitmq/debian stretch/main amd64 Packages [867 B]
Fetched 55.8 kB in 2s (24.8 kB/s)
Reading package lists... Done
W: GPG error: https://dl.bintray.com/rabbitmq/debian stretch Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6B73A36E6026DFCA
W: The repository 'https://dl.bintray.com/rabbitmq/debian stretch Release' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
The following file will not be used.
$ file /etc/apt/trusted.gpg.d/rabbit.gpg
rabbit.gpg: GPG keybox database version 1, created-at Thu Jan 25 16:41:01 2018, last-maintained Thu Jan 25 16:41:01 2018
The following file will be used.
$ file /etc/apt/trusted.gpg.d/rabbit.gpg
rabbit.gpg: GPG key public ring, created Tue May 17 09:09:50 2016
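The effect of the gnupg-ring: prefix can be inspected with a throwaway key. Without the prefix, GnuPG 2.1 and later creates a keybox database; with it, a legacy public ring that apt accepts. The paths and the demo key user id below are illustrative:

```shell
# Disposable GnuPG home with an unprotected key to export.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
cd "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key demo@example.org default default never
gpg --armor --export demo@example.org > demo.asc

# Import into a legacy public ring instead of a keybox database.
gpg --no-default-keyring --keyring gnupg-ring:"$PWD/demo-ring.gpg" --import demo.asc
file demo-ring.gpg
```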
apt-key is a shell script, so you can inspect and debug it to learn how it works.
There are many keyservers that can be used to get the public key, like keyserver.ubuntu.com, pgp.mit.edu and the keys.gnupg.net pool.
$ host keys.gnupg.net
keys.gnupg.net is an alias for hkps.pool.sks-keyservers.net.
hkps.pool.sks-keyservers.net has address 193.224.163.43
hkps.pool.sks-keyservers.net has address 216.66.15.2
hkps.pool.sks-keyservers.net has address 18.9.60.141
hkps.pool.sks-keyservers.net has address 192.94.109.73
hkps.pool.sks-keyservers.net has address 37.191.226.104
hkps.pool.sks-keyservers.net has address 51.15.0.17
hkps.pool.sks-keyservers.net has address 193.164.133.100
hkps.pool.sks-keyservers.net has address 176.9.147.41
hkps.pool.sks-keyservers.net has IPv6 address 2606:1c00:2802::b
hkps.pool.sks-keyservers.net has IPv6 address 2a02:c205:3001:3626::1
hkps.pool.sks-keyservers.net has IPv6 address 2001:bc8:214f:200::1
hkps.pool.sks-keyservers.net has IPv6 address 2001:470:1:116::6
hkps.pool.sks-keyservers.net has IPv6 address 2001:738:0:600:216:3eff:fe02:42
This technique is very common. To be honest, it is more common than I initially thought, so I will show you how to create a single shell script that will display, create or destroy a temporary file system depending on the name used to execute it.
Standard utilities
Look at the standard utilities, as many of them behave differently depending on the executable name used.
$ find {/bin,/sbin,/usr/bin,/usr/sbin} -type l -not -lname "*alternatives*" -and -not -lname "/*" -exec stat --format="%N" {} \; | sort
This is a shell script that will change its behaviour depending on the name it was called with. Notice that the case statement uses the script basename to execute different actions.
#!/bin/bash
# Perform different operation depending on the shell script name
# shell script name and real location
shell_script_name=$(basename "$0")
real_shell_script_location=$(readlink -f "$0")
# usage
usage() {
echo "Usage:"
echo " tmpfs-info"
echo " display information about mounted tmpfs file systems"
echo " tmpfs-create location size"
echo " create tmpfs file system using provided location and size"
echo " tmpfs-destroy location"
echo " destroy tmpfs file system at given location"
}
# display information
tmpfs-info() {
mount -t tmpfs
}
# unmount tmpfs filesystem
tmpfs-destroy() {
if [ -n "$1" ]; then
location=$(readlink -f $1)
mountpoint $location >/dev/null
if [ "$?" -eq "0" ]; then
fs=$(mount -t tmpfs | awk '$3 == "'$location'" {print $1}')
if [ "$fs" == "tmpfs" ]; then
umount $location
else
echo "error: mountpoint is not tmpfs"
fi
else
echo "error: provided location is not a mountpoint"
fi
else
echo "error: missing mountpoint"
fi
}
# create tmpfs filesystem
tmpfs-create() {
if [ -n "$1" ] && [ -n "$2" ]; then
location=$(readlink -f $1)
mountpoint $location >/dev/null
if [ "$?" -eq "1" ]; then
mount -t tmpfs -o size="$2" tmpfs "$location"
else
echo "error: provided location is already a mountpoint"
fi
else
echo "error: missing mountpoint or size"
fi
}
# display shell script name and location
echo "Executed command $shell_script_name using $real_shell_script_location shell script"
# execute particular function depending on the shell script name
case "$shell_script_name" in
"tmpfs-info")
tmpfs-info
;;
"tmpfs-create")
tmpfs-create "$1""$2"
;;
"tmpfs-destroy")
tmpfs-destroy "$1"
;;
*)
usage
;;
esac
Store the above-mentioned shell script as tmpfs.sh in /usr/bin directory.
Ensure that executable bit is set.
$ sudo chmod +x /usr/bin/tmpfs.sh
Create symbolic links equivalent to the actions defined inside case statement.
$ sudo ln -s /usr/bin/tmpfs.sh /usr/bin/tmpfs-info
$ sudo ln -s /usr/bin/tmpfs.sh /usr/bin/tmpfs-create
$ sudo ln -s /usr/bin/tmpfs.sh /usr/bin/tmpfs-destroy
Execute the shell script directly to display usage information.
$ tmpfs.sh
Executed command tmpfs.sh using /usr/bin/tmpfs.sh shell script
Usage:
tmpfs-info
display information about mounted tmpfs file systems
tmpfs-create location size
create tmpfs file system using provided location and size
tmpfs-destroy location
destroy tmpfs file system at given location
List mounted tmpfs file systems.
$ tmpfs-info
Executed command tmpfs-info using /usr/bin/tmpfs.sh shell script
none on /dev type tmpfs (rw,nodev,relatime,size=492k,mode=755,uid=100000,gid=100000)
tmpfs on /dev/lxd type tmpfs (rw,relatime,size=100k,mode=755)
tmpfs on /dev/.lxd-mounts type tmpfs (rw,relatime,size=100k,mode=711)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,uid=100000,gid=100000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755,uid=100000,gid=100000)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,uid=100000,gid=100000)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755,uid=100000,gid=100000)
tmpfs on /var/www/tmpfs type tmpfs (rw,nosuid,nodev,noexec,relatime,size=131072k,uid=100000,gid=100000)
Create tmpfs file system.
$ mkdir tmpfs_test
$ sudo tmpfs-create tmpfs_test 128M
List mounted tmpfs file systems to check out a new one.
$ tmpfs-info
Executed command tmpfs-info using /usr/bin/tmpfs.sh shell script
none on /dev type tmpfs (rw,nodev,relatime,size=492k,mode=755,uid=100000,gid=100000)
tmpfs on /dev/lxd type tmpfs (rw,relatime,size=100k,mode=755)
tmpfs on /dev/.lxd-mounts type tmpfs (rw,relatime,size=100k,mode=711)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,uid=100000,gid=100000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755,uid=100000,gid=100000)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,uid=100000,gid=100000)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755,uid=100000,gid=100000)
tmpfs on /var/www/tmpfs type tmpfs (rw,nosuid,nodev,noexec,relatime,size=131072k,uid=100000,gid=100000)
tmpfs on /root/tmpfs_test type tmpfs (rw,nodev,relatime,size=131072k,uid=100000,gid=100000)
Destroy previously created tmpfs file system.
$ sudo tmpfs-destroy tmpfs_test
List mounted tmpfs file systems to verify that it was removed.
$ tmpfs-info
Executed command tmpfs-info using /usr/bin/tmpfs.sh shell script
none on /dev type tmpfs (rw,nodev,relatime,size=492k,mode=755,uid=100000,gid=100000)
tmpfs on /dev/lxd type tmpfs (rw,relatime,size=100k,mode=755)
tmpfs on /dev/.lxd-mounts type tmpfs (rw,relatime,size=100k,mode=711)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,uid=100000,gid=100000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755,uid=100000,gid=100000)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,uid=100000,gid=100000)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755,uid=100000,gid=100000)
tmpfs on /var/www/tmpfs type tmpfs (rw,nosuid,nodev,noexec,relatime,size=131072k,uid=100000,gid=100000)
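The dispatch mechanism itself fits in a few lines. A minimal self-contained sketch (using a temporary directory instead of the /usr/bin installation above, with made-up action names):

```shell
# One script, several names: behaviour is selected by basename "$0".
dir=$(mktemp -d)
cat > "$dir/multi.sh" << 'EOF'
#!/bin/bash
case "$(basename "$0")" in
  hello) echo "hello" ;;
  bye)   echo "bye" ;;
  *)     echo "usage: hello | bye" ;;
esac
EOF
chmod +x "$dir/multi.sh"

# Each symbolic link selects a different branch of the case statement.
ln -s "$dir/multi.sh" "$dir/hello"
ln -s "$dir/multi.sh" "$dir/bye"
"$dir/hello"   # prints "hello"
"$dir/bye"     # prints "bye"
```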
Delete Nextcloud bookmarks using the API. Notice that the data is paginated in this case, which makes it more interesting than regular add/remove operations.
Prerequisites
Install curl utility to perform API calls.
$ sudo apt-get install curl
Install jq utility to parse JSON files.
$ sudo apt-get install jq
Create application password
Create Nextcloud application password, do not use your regular credentials.
Delete Nextcloud bookmarks
Create nextcloud_bookmarks_del.sh shell script.
Shell script
#!/bin/bash
# Delete ALL Nextcloud bookmarks
# https://blog.sleeplessbeastie.eu/
# usage info
usage(){
echo "Usage:"
echo " $0 -r nextcloud_url -u username -p passsword"
echo ""
echo "Parameters:"
echo " -r nextcloud_url : set Nextcloud URL (required)"
echo " -u username : set username (required)"
echo " -p password : set password (required)"
echo ""
}
# parse parameters
while getopts "r:u:p:f:t" option; do
case $option in
"r")
param_nextcloud_address="${OPTARG}"
param_nextcloud_address_defined=true
;;
"u")
param_username="${OPTARG}"
param_username_defined=true
;;
"p")
param_password="${OPTARG}"
param_password_defined=true
;;
\?|:|*)
usage
exit
;;
esac
done
if [ "${param_nextcloud_address_defined}" = true ] && \
[ "${param_username_defined}" = true ] && \
[ "${param_password_defined}" = true ]; then
continue_pagination="1"
while [ "${continue_pagination}" -eq "1" ]; do
bookmarks=$(curl --silent --output - -X GET --user "${param_username}:${param_password}" --header "Accept: application/json" "${param_nextcloud_address}/index.php/apps/bookmarks/public/rest/v2/bookmark" | \
jq -r '.data[].id')
if [ -z "${bookmarks}" ]; then
echo "Nextcloud bookmarks list is empty. Stopping."
continue_pagination="0"
else
for bookmark in ${bookmarks}; do
status=$(curl --silent -X DELETE --user "${param_username}:${param_password}" "${param_nextcloud_address}/index.php/apps/bookmarks/public/rest/v2/bookmark/${bookmark}" |
jq -r 'select(.status != "success") | .status')
if [ -n "${status}" ]; then
echo "There was an error when deleting Nextcloud bookmark id \"${bookmark}\". Stopping."
exit 1
else
echo "Deleted Nextcloud bookmark id \"${bookmark}\""
fi
done
fi
done
else
usage
fi
Usage
Display usage information.
$ ./nextcloud_bookmarks_del.sh
Usage:
./nextcloud_bookmarks_del.sh -r nextcloud_url -u username -p password
Parameters:
-r nextcloud_url : set Nextcloud URL (required)
-u username : set username (required)
-p password : set password (required)
Delete bookmarks.
$ nextcloud_bookmarks_del.sh -r https://cloud.example.org/ -u milosz -p Mjdu3-kDnru-4UksA-fYs0w
Deleted Nextcloud bookmark id "735"
Deleted Nextcloud bookmark id "729"
Deleted Nextcloud bookmark id "734"
Deleted Nextcloud bookmark id "733"
Deleted Nextcloud bookmark id "732"
Deleted Nextcloud bookmark id "731"
Deleted Nextcloud bookmark id "730"
Deleted Nextcloud bookmark id "728"
Deleted Nextcloud bookmark id "727"
Deleted Nextcloud bookmark id "726"
Deleted Nextcloud bookmark id "724"
Deleted Nextcloud bookmark id "725"
Deleted Nextcloud bookmark id "723"
Deleted Nextcloud bookmark id "722"
Nextcloud bookmarks list is empty. Stopping.
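The pagination handling relies on one jq filter: given a bookmark list response, it extracts every id on the current page, and the loop keeps requesting the first page because deleted bookmarks no longer occupy it. The sample JSON below is illustrative:

```shell
# Extract bookmark ids from a Nextcloud bookmarks API response.
echo '{"status": "success", "data": [{"id": 735}, {"id": 729}]}' | \
  jq -r '.data[].id'
# prints:
# 735
# 729
```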
Configure and enable command-line tab completion for known SSH hosts to ease day-to-day operations from a personal notebook. You do not need to configure or enable this on Debian-based distributions, but it is good to know how to do it, as you never know when it will come in handy. This article is an extension of my previous article on how to add an SSH menu to the Unity launcher.
Ensure that host names and addresses are not hashed for current user from now on.
$ cat << EOF | tee -a ~/.ssh/config
Host *
HashKnownHosts no
EOF
Host *
HashKnownHosts no
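Existing entries keep their current form; whether the file already contains hashed entries can be checked directly, as hashed lines start with the |1| marker. A quick sketch:

```shell
# Report whether ~/.ssh/known_hosts already contains hashed entries.
if grep -q '^|1|' ~/.ssh/known_hosts 2>/dev/null; then
  echo "known_hosts contains hashed entries"
else
  echo "no hashed entries found"
fi
```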
Create completion configuration for ssh known hosts.
$ cat << EOF | sudo tee /etc/bash_completion.d/ssh_known_hosts_completion
#
# Bash completion for known SSH hosts
#
# get user home directory
_get_user_home_directory(){
local user_id
local user_home_directory
user_id=\$(id -u)
user_home_directory=\$(getent passwd \${user_id} | cut -d ":" -f 6)
echo "\${user_home_directory}"
}
# determine if known hosts file is hashed
is_known_hosts_hashed=""
user_home_directory=\$(_get_user_home_directory)
for ssh_config_file in "\${user_home_directory}/.ssh/config" "/etc/ssh/ssh_config"; do
if [ -f "\${ssh_config_file}" ]; then
if [ -z "\${is_known_hosts_hashed}" ]; then
is_known_hosts_hashed=\$(awk '/^#/ {next} /HashKnownHosts/ {print \$2}' \${ssh_config_file})
fi
fi
done
# assume that by default known hosts file is hashed
if [ -z "\${is_known_hosts_hashed}" ]; then
is_known_hosts_hashed="yes";
fi
# generate completion reply with compgen
_ssh_known_hosts_completion()
{
local current_argument
local ssh_known_hosts
local user_home_directory
current_argument=\${COMP_WORDS[COMP_CWORD]};
user_home_directory=\$(_get_user_home_directory)
ssh_known_hosts=""
if [ -f \${user_home_directory}/.ssh/known_hosts ]; then
ssh_known_hosts=\$(awk '/^\|/ {next} {print \$1}' \${user_home_directory}/.ssh/known_hosts | sort | uniq)
fi
COMPREPLY=(\$(compgen -W '\${ssh_known_hosts}' -- \${current_argument}))
}
# bind completion to ssh command only if known hosts file is not hashed
if [ "\${is_known_hosts_hashed}" == "no" ]; then
complete -F _ssh_known_hosts_completion ssh
fi
EOF
#
# Bash completion for known SSH hosts
#
# get user home directory
_get_user_home_directory(){
local user_id
local user_home_directory
user_id=$(id -u)
user_home_directory=$(getent passwd ${user_id} | cut -d ":" -f 6)
echo "${user_home_directory}"
}
# determine if known hosts file is hashed
is_known_hosts_hashed=""
user_home_directory=$(_get_user_home_directory)
for ssh_config_file in "${user_home_directory}/.ssh/config" "/etc/ssh/ssh_config"; do
if [ -f "${ssh_config_file}" ]; then
if [ -z "${is_known_hosts_hashed}" ]; then
is_known_hosts_hashed=$(awk '/^#/ {next} /HashKnownHosts/ {print $2}' ${ssh_config_file})
fi
fi
done
# assume that by default known hosts file is hashed
if [ -z "${is_known_hosts_hashed}" ]; then
is_known_hosts_hashed="yes";
fi
# generate completion reply with compgen
_ssh_known_hosts_completion()
{
local current_argument
local ssh_known_hosts
local user_home_directory
current_argument=${COMP_WORDS[COMP_CWORD]};
user_home_directory=$(_get_user_home_directory)
ssh_known_hosts=""
if [ -f ${user_home_directory}/.ssh/known_hosts ]; then
ssh_known_hosts=$(awk '/^\|/ {next} {print $1}' ${user_home_directory}/.ssh/known_hosts | sort | uniq)
fi
COMPREPLY=($(compgen -W '${ssh_known_hosts}' -- ${current_argument}))
}
# bind completion to ssh command only if known hosts file is not hashed
if [ "${is_known_hosts_hashed}" == "no" ]; then
complete -F _ssh_known_hosts_completion ssh
fi
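The heart of the function above is compgen -W, which prints the words from the supplied list that match the prefix currently being completed; COMPREPLY is just an array built from that output. A standalone sketch with made-up host names:

```shell
# compgen is a bash builtin, so invoke it through bash explicitly;
# it prints every word from the -W list matching the prefix after "--".
bash -c 'compgen -W "alpha.example.org beta.example.org bastion.example.org" -- ba'
# prints: bastion.example.org
```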
You can use a much simpler shell script to take advantage of it on macOS.
$ cat << EOF | tee -a ~/.bashrc
# generate completion reply with compgen for known hosts (ssh)
_ssh_known_hosts()
{
local current_argument;
local ssh_known_hosts;
current_argument=\${COMP_WORDS[COMP_CWORD]};
ssh_known_hosts=\$(awk '{print \$1}' ~/.ssh/known_hosts | sort | uniq);
COMPREPLY=(\$(compgen -W '\${ssh_known_hosts}' -- \${current_argument}))
}
# bind completion to ssh command
complete -F _ssh_known_hosts ssh
# generate completion reply with compgen for known hosts (ssh)
_ssh_known_hosts()
{
local current_argument;
local ssh_known_hosts;
current_argument=${COMP_WORDS[COMP_CWORD]};
ssh_known_hosts=$(awk '{print $1}' ~/.ssh/known_hosts | sort | uniq);
COMPREPLY=($(compgen -W '${ssh_known_hosts}' -- ${current_argument}))
}
# bind completion to ssh command
complete -F _ssh_known_hosts ssh
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Install ruby and nodejs.
$ sudo apt-get install ruby nodejs
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
ca-certificates fonts-lato javascript-common libffi6 libgdbm3 libgmp10 libicu57 libjs-jquery libreadline7
libruby2.3 libssl1.1 libuv1 libyaml-0-2 openssl rake readline-common ruby-did-you-mean ruby-minitest
ruby-net-telnet ruby-power-assert ruby-test-unit ruby2.3 rubygems-integration unzip zip
Suggested packages:
apache2 | lighttpd | httpd readline-doc ri ruby-dev bundler
The following NEW packages will be installed:
ca-certificates fonts-lato javascript-common libffi6 libgdbm3 libgmp10 libicu57 libjs-jquery libreadline7
libruby2.3 libssl1.1 libuv1 libyaml-0-2 nodejs openssl rake readline-common ruby ruby-did-you-mean ruby-minitest
ruby-net-telnet ruby-power-assert ruby-test-unit ruby2.3 rubygems-integration unzip zip
0 upgraded, 27 newly installed, 0 to remove and 0 not upgraded.
Need to get 20.8 MB of archives.
After this operation, 80.9 MB of additional disk space will be used.
Do you want to continue? [Y/n]
[...]
Install and configure localepurge package which provides a fancy shell script to recover disk space used by useless localizations.
Install required software
Update package index.
$ sudo apt-get update
Install localepurge utility.
$ sudo apt-get install localepurge
Initial configuration will start right away.
Perform initial configuration
You can start configuration by executing the following command.
$ sudo dpkg-reconfigure localepurge
Select locales to be kept in the system.
Decide if you want to use localepurge during package installation (dpkg operations).
Decide if you want to delete localized manual pages. This option affects dpkg operations and standalone execution as well.
Additional notes
Use the /etc/locale.gen file to preselect locales to be kept in the system on the first configuration run.
It is up to you if you want to use localepurge during dpkg operations or as a standalone utility.
If you decided to use localepurge during dpkg operations then it will create /etc/dpkg/dpkg.cfg.d/50localepurge configuration file. The contents of this file will differ depending on whether you decided to remove or leave the localized manual pages.
$ cat /etc/dpkg/dpkg.cfg.d/50localepurge
# DO NOT MODIFY/REMOVE THIS FILE - IT IS AUTO-GENERATED
#
# To remove/disable this, run dpkg-reconfigure localepurge
# and say no to/disable the "Use dpkg --path-exclude" option.
#
# To change what patterns are affected use:
# * dpkg-reconfigure localepurge
# (to alter which locales are kept and whether manpages should
# be purged)
# * Add a dpkg config file in /etc/dpkg/dpkg.cfg.d that is read
# after this file with the necessary --path-include and
# --path-exclude options.
#
# Report faulty patterns against the localepurge package.
#
# Paths to purge
path-exclude=/usr/share/locale/*
path-exclude=/usr/share/gnome/help/*/*
path-exclude=/usr/share/doc/kde/HTML/*/*
path-exclude=/usr/share/omf/*/*-*.emf
path-exclude=/usr/share/man/*
# Paths to keep
path-include=/usr/share/locale/locale.alias
path-include=/usr/share/locale/pl/*
path-include=/usr/share/locale/pl_PL/*
path-include=/usr/share/locale/pl_PL.UTF-8/*
path-include=/usr/share/gnome/help/*/C/*
path-include=/usr/share/gnome/help/*/pl/*
path-include=/usr/share/gnome/help/*/pl_PL/*
path-include=/usr/share/gnome/help/*/pl_PL.UTF-8/*
path-include=/usr/share/doc/kde/HTML/C/*
path-include=/usr/share/doc/kde/HTML/pl/*
path-include=/usr/share/doc/kde/HTML/pl_PL/*
path-include=/usr/share/doc/kde/HTML/pl_PL.UTF-8/*
path-include=/usr/share/omf/*/*-pl.emf
path-include=/usr/share/omf/*/*-pl_PL.emf
path-include=/usr/share/omf/*/*-pl_PL.UTF-8.emf
path-include=/usr/share/omf/*/*-C.emf
path-include=/usr/share/locale/languages
path-include=/usr/share/locale/all_languages
path-include=/usr/share/locale/currency/*
path-include=/usr/share/locale/l10n/*
path-include=/usr/share/man/pl/*
path-include=/usr/share/man/pl_PL/*
path-include=/usr/share/man/pl_PL.UTF-8/*
path-include=/usr/share/man/man[0-9]/*
Basic configuration is stored in /etc/locale.nopurge.
$ cat /etc/locale.nopurge
####################################################
# This is the configuration file for localepurge(8).
####################################################
####################################################
# Uncommenting this string enables the use of dpkg's
# --path-exclude feature. In this mode, localepurge
# will configure dpkg to exclude the desired locales
# at unpack time.
#
# If enabled, the following 3 options will be
# disabled:
#
# QUICKNDIRTYCALC
# SHOWFREEDSPACE
# VERBOSE
#
# And the following option will be enabled and cannot
# be disabled (unless USE_DPKG is disabled):
#
# DONTBOTHERNEWLOCALE
#
USE_DPKG
####################################################
####################################################
# Uncommenting this string enables removal of localized
# man pages based on the configuration information for
# locale files defined below:
MANDELETE
####################################################
# Uncommenting this string causes localepurge to simply delete
# locales which have newly appeared on the system without
# bothering you about it:
DONTBOTHERNEWLOCALE
####################################################
# Uncommenting this string enables display of freed disk
# space if localepurge has purged any superfluous data:
SHOWFREEDSPACE
#####################################################
# Commenting out this string enables faster but less
# accurate calculation of freed disk space:
#QUICKNDIRTYCALC
#####################################################
# Commenting out this string disables verbose output:
#VERBOSE
#####################################################
# Following locales won't be deleted from this system
# after package installations done with apt-get(8):
pl
pl_PL
pl_PL.UTF-8
In case of emergency
Use the following command to re-create the manual page index cache.
$ sudo mandb -c
Use the following shell script after disabling localepurge to reinstall packages and fix missing locales.
There are those rare situations where you do not know the public key that is required to verify repository signatures, but still want to add the repository and the public key used to sign it. Fortunately, there is an easy answer to that question.
Install dirmngr using the following command, as it is required to perform the network operations described here.
$ sudo apt-get install dirmngr
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
dbus-user-session pinentry-gnome3 tor
The following NEW packages will be installed:
dirmngr
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 595 kB of archives.
After this operation, 1,110 kB of additional disk space will be used.
Get:1 http://ftp.task.gda.pl/debian stretch/main amd64 dirmngr amd64 2.1.18-8~deb9u1 [595 kB]
Fetched 595 kB in 0s (1,882 kB/s)
Selecting previously unselected package dirmngr.
(Reading database ... 26571 files and directories currently installed.)
Preparing to unpack .../dirmngr_2.1.18-8~deb9u1_amd64.deb ...
Unpacking dirmngr (2.1.18-8~deb9u1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up dirmngr (2.1.18-8~deb9u1) ...
Install the apt-transport-https package to access repositories over HTTPS.
$ sudo apt-get install apt-transport-https
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
apt-transport-https
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 171 kB of archives.
After this operation, 243 kB of additional disk space will be used.
Get:1 http://ftp.task.gda.pl/debian stretch/main amd64 apt-transport-https amd64 1.4.8 [171 kB]
Fetched 171 kB in 0s (831 kB/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 26565 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_1.4.8_amd64.deb ...
Unpacking apt-transport-https (1.4.8) ...
Setting up apt-transport-https (1.4.8) ...
$ echo "deb https://dl.bintray.com/rabbitmq/debian stretch main" | sudo tee /etc/apt/sources.list.d/bintray.rabbitmq.list
I will use the repository URL (https://dl.bintray.com/rabbitmq/debian) and distribution (stretch) parts to build the URL of the signature file and use it to display the key ID.
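A sketch of that construction, assuming the standard Debian repository layout where the detached signature lives at dists/<distribution>/Release.gpg; the actual fetch is commented out because it needs network access and gpg, and the exact --list-packets output format varies between gpg versions:

```shell
# Build the address of the detached signature file from the
# repository URL and distribution name used in this article.
repository="https://dl.bintray.com/rabbitmq/debian"
distribution="stretch"
signature_url="${repository}/dists/${distribution}/Release.gpg"
echo "${signature_url}"
# prints: https://dl.bintray.com/rabbitmq/debian/dists/stretch/Release.gpg

# To see which key ID made the signature (requires network access):
# wget -q -O - "${signature_url}" | gpg --list-packets
```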
Make iptables configuration persistent using basic system utilities or a designated boot-time loader.
Use basic system utilities
This simple solution is most suitable for a system with a single network interface.
Edit the /etc/network/interfaces global configuration file or a specific interface configuration in the /etc/network/interfaces.d/ directory to define pre-up and post-down options that load or save iptables configuration using the /etc/firewall.rules file.
Firewall configuration will be restored before bringing the interface up and saved after taking the interface down.
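A sketch of such an interface stanza, assuming a single eth0 interface configured via DHCP; adjust the interface name and method to your setup:

```
auto eth0
iface eth0 inet dhcp
    pre-up iptables-restore < /etc/firewall.rules
    post-down iptables-save > /etc/firewall.rules
```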
Use boot-time loader for firewall rules
Install iptables-persistent package.
$ sudo apt-get install iptables-persistent
Store IPv4 iptables configuration during installation process.
Store IPv6 iptables configuration during installation process.
Use dpkg-reconfigure to execute this step later.
$ sudo dpkg-reconfigure iptables-persistent
Ensure that netfilter-persistent will be enabled at boot.
$ sudo systemctl enable netfilter-persistent
Change the FLUSH_ON_STOP variable in the /etc/default/netfilter-persistent default configuration file to flush firewall rules when the service is stopped. This step is not necessary if you want the default behaviour.
$ cat /etc/default/netfilter-persistent
# Configuration for netfilter-persistent
# Plugins may extend this file or have their own
FLUSH_ON_STOP=0
IPv4 firewall rules are not saved automatically on system shutdown. Use the following command to update these.
$ iptables-save > /etc/iptables/rules.v4
IPv6 firewall rules are not saved automatically on system shutdown. Use the following command to update these.
$ ip6tables-save > /etc/iptables/rules.v6
Additional notes
Import iptables-persistent configuration before package installation to automate the whole process.
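A sketch of that automation via debconf preseeding; the question names below come from the iptables-persistent debconf template, the seed file path is arbitrary, and the two privileged steps are commented out:

```shell
# Store the answers iptables-persistent would normally ask for.
seed_file="/tmp/iptables-persistent.seed"
cat > "${seed_file}" <<'EOF'
iptables-persistent iptables-persistent/autosave_v4 boolean true
iptables-persistent iptables-persistent/autosave_v6 boolean true
EOF
# Load the answers and install without any prompts (needs root):
# sudo debconf-set-selections < "${seed_file}"
# sudo DEBIAN_FRONTEND=noninteractive apt-get install -y iptables-persistent
grep -c 'boolean true' "${seed_file}"   # prints: 2
```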
Perform simple incremental backup using rdiff-backup utility.
Required software
Install rdiff-backup utility.
$ sudo apt-get install rdiff-backup
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libjs-sphinxdoc libjs-underscore librsync1 python-pylibacl python-pyxattr
Suggested packages:
python-pylibacl-dbg python-pyxattr-dbg
The following NEW packages will be installed:
libjs-sphinxdoc libjs-underscore librsync1 python-pylibacl python-pyxattr rdiff-backup
0 upgraded, 6 newly installed, 0 to remove and 28 not upgraded.
Need to get 386 kB of archives.
After this operation, 1,500 kB of additional disk space will be used.
Do you want to continue? [Y/n]
[...]
Perform incremental backup on local machine
Perform an incremental backup of the blog.sleeplessbeastie.eu directory to /srv/backup/blog.sleeplessbeastie.eu, but exclude the _site directory with generated content.
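The exact command that produced the session below is not shown. A sketch of what it could look like, assuming rdiff-backup 1.x is run from the directory containing blog.sleeplessbeastie.eu (the -v5 verbosity is a guess based on the per-file messages); it is wrapped in a function so nothing runs until you call it:

```shell
# Hypothetical sketch, not the author's exact command.
run_backup() {
  rdiff-backup -v5 --print-statistics \
    --exclude 'blog.sleeplessbeastie.eu/_site' \
    blog.sleeplessbeastie.eu /srv/backup/blog.sleeplessbeastie.eu
}
# call run_backup on a machine with rdiff-backup installed
type run_backup > /dev/null && echo "sketch loaded"
```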
Using rdiff-backup version 1.2.8
POSIX ACLs not supported by filesystem at blog.sleeplessbeastie.eu
Unable to import win32security module. Windows ACLs
not supported by filesystem at blog.sleeplessbeastie.eu
escape_dos_devices not required by filesystem at blog.sleeplessbeastie.eu
-----------------------------------------------------------------
Detected abilities for source (read only) file system:
Access control lists Off
Extended attributes On
Windows access control lists Off
Case sensitivity On
Escape DOS devices Off
Escape trailing spaces Off
Mac OS X style resource forks Off
Mac OS X Finder information Off
-----------------------------------------------------------------
POSIX ACLs not supported by filesystem at /srv/backup/blog.sleeplessbeastie.eu/rdiff-backup-data/rdiff-backup.tmp.0
Unable to import win32security module. Windows ACLs
not supported by filesystem at /srv/backup/blog.sleeplessbeastie.eu/rdiff-backup-data/rdiff-backup.tmp.0
escape_dos_devices not required by filesystem at /srv/backup/blog.sleeplessbeastie.eu/rdiff-backup-data/rdiff-backup.tmp.0
-----------------------------------------------------------------
Detected abilities for destination (read/write) file system:
Ownership changing Off
Hard linking On
fsync() directories On
Directory inc permissions On
High-bit permissions On
Symlink permissions Off
Extended filenames On
Windows reserved filenames Off
Access control lists Off
Extended attributes On
Windows access control lists Off
Case sensitivity On
Escape DOS devices Off
Escape trailing spaces Off
Mac OS X style resource forks Off
Mac OS X Finder information Off
-----------------------------------------------------------------
Backup: must_escape_dos_devices = 0
Starting increment operation blog.sleeplessbeastie.eu to /srv/backup/blog.sleeplessbeastie.eu
Processing changed file .
Incrementing mirror file /srv/backup/blog.sleeplessbeastie.eu
Processing changed file _posts
Incrementing mirror file /srv/backup/blog.sleeplessbeastie.eu/_posts
Processing changed file _posts/.2018-09-12-how-to-perform-incremental-backup-using-rdiff-backup.html.swp
Incrementing mirror file /srv/backup/blog.sleeplessbeastie.eu/_posts/.2018-09-12-how-to-perform-incremental-backup-using-rdiff-backup.html.swp
Processing changed file _posts/2018-09-12-how-to-perform-incremental-backup-using-rdiff-backup.html
Incrementing mirror file /srv/backup/blog.sleeplessbeastie.eu/_posts/2018-09-12-how-to-perform-incremental-backup-using-rdiff-backup.html
--------------[ Session statistics ]--------------
StartTime 1519208194.00 (Wed Feb 21 10:16:34 2018)
EndTime 1519208196.31 (Wed Feb 21 10:16:36 2018)
ElapsedTime 2.31 (2.31 seconds)
SourceFiles 4122
SourceFileSize 321298360 (306 MB)
MirrorFiles 4122
MirrorFileSize 321298180 (306 MB)
NewFiles 0
NewFileSize 0 (0 bytes)
DeletedFiles 0
DeletedFileSize 0 (0 bytes)
ChangedFiles 4
ChangedSourceSize 13850 (13.5 KB)
ChangedMirrorSize 13670 (13.3 KB)
IncrementFiles 4
IncrementFileSize 858 (858 bytes)
TotalDestinationSizeChange 1038 (1.01 KB)
Errors 0
--------------------------------------------------
Verify current data in repository.
$ rdiff-backup --verify-at-time now /srv/backup/blog.sleeplessbeastie.eu
Found 2 increments:
increments.2018-02-21T10:13:59Z.dir Wed Feb 21 10:13:59 2018
increments.2018-02-21T10:16:34Z.dir Wed Feb 21 10:16:34 2018
Current mirror: Wed Feb 21 10:21:56 2018
List files in the most recent backup.
$ rdiff-backup --list-at-time now /srv/backup/blog.sleeplessbeastie.eu
You will need debconf-set-selections on the target system where you want to insert new values into the debconf database. It is provided by the debconf package, so it is installed by default.
$ dpkg-query -S /usr/bin/debconf-set-selections
debconf: /usr/bin/debconf-set-selections
You will need debconf-get-selections on the source system from which you want to read values stored in the debconf database. It is provided by the debconf-utils package, so you need to install it manually.
$ dpkg-query -S /usr/bin/debconf-get-selections
debconf-utils: /usr/bin/debconf-get-selections
Install debconf-utils package.
$ sudo apt-get install debconf-utils
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
debconf-utils
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 56.6 kB of archives.
After this operation, 108 kB of additional disk space will be used.
Get:1 http://ftp.task.gda.pl/debian stretch/main amd64 debconf-utils all 1.5.61 [56.6 kB]
Fetched 56.6 kB in 0s (146 kB/s)
Selecting previously unselected package debconf-utils.
(Reading database ... 28000 files and directories currently installed.)
Preparing to unpack .../debconf-utils_1.5.61_all.deb ...
Unpacking debconf-utils (1.5.61) ...
Setting up debconf-utils (1.5.61) ...
Processing triggers for man-db (2.7.6.1-2) ...
Read the values from the debconf database that apply to the iptables-persistent package.
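On the source system this boils down to filtering debconf-get-selections output by package name; a sketch (the real command is commented out because it needs the debconf database, and the printf line mirrors the tab-separated format it emits):

```shell
# On a real system:
# debconf-get-selections | grep '^iptables-persistent'
# Each line holds: owner, question, type, value (tab-separated).
printf 'iptables-persistent\tiptables-persistent/autosave_v4\tboolean\ttrue\n' \
  | grep '^iptables-persistent'
```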
Make iptables configuration persistent using a custom service file with additional features: a configurable wait time, so you can safely interrupt execution, and a test mode that will disable the firewall after a defined period of time.
LSB init script
Create the /etc/init.d/iptables-firewall service file. Edit the firewall_start function to apply your custom iptables configuration.
#!/bin/sh
### BEGIN INIT INFO
# Provides: iptables-firewall
# Required-Start: mountkernfs $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: iptables firewall
# Description: local iptables firewall
### END INIT INFO
. /lib/lsb/init-functions
# Limit PATH
PATH="/sbin:/usr/sbin:/bin:/usr/bin"
# Time (in seconds) to change your mind and stop operation
WAIT_FOR="15"
# Time (in minutes) to wait before stop action (test action)
TEST_FOR="5"
# Source configuration
if [ -f "/etc/default/iptables-firewall" ]; then
. /etc/default/iptables-firewall
fi
# iptables configuration
firewall_start() {
# Flush rules and delete custom chains
iptables -F
iptables -X
# Define chain to allow particular source addresses
iptables -N chain-incoming-ssh
iptables -A chain-incoming-ssh -s 192.168.1.149 -j ACCEPT -m comment --comment "local access"
iptables -A chain-incoming-ssh -p tcp --dport 22 -j LOG --log-prefix "[fw-inc-ssh] " -m limit --limit 6/min --limit-burst 4
iptables -A chain-incoming-ssh -j DROP
# Define chain to log and drop incoming packets
iptables -N chain-incoming-log-and-drop
iptables -A chain-incoming-log-and-drop -j LOG --log-prefix "[fw-inc-drop] " -m limit --limit 6/min --limit-burst 4
iptables -A chain-incoming-log-and-drop -j DROP
# Define chain to log and drop outgoing packets
iptables -N chain-outgoing-log-and-drop
iptables -A chain-outgoing-log-and-drop -j LOG --log-prefix "[fw-out-drop] " -m limit --limit 6/min --limit-burst 4
iptables -A chain-outgoing-log-and-drop -j DROP
# Drop invalid packets
iptables -A INPUT -m conntrack --ctstate INVALID -j chain-incoming-log-and-drop
# Accept everything on loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# ACCEPT incoming packets for established connections
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Accept incoming ICMP
iptables -A INPUT -p icmp -j ACCEPT
# Accept incoming SSH
iptables -A INPUT -p tcp --dport 22 -j chain-incoming-ssh
# Accept outgoing packets for established connections
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Accept outgoing DNS
iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
# Accept outgoing NTP
iptables -A OUTPUT -p tcp --dport 123 -j ACCEPT
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
# Accept outgoing HTTP/S
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
# Accept outgoing SSH
iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT
# Accept outgoing ICMP
iptables -A OUTPUT -p icmp -j ACCEPT
# Log not accounted outgoing traffic
iptables -A OUTPUT -j chain-outgoing-log-and-drop
# Log not accounted forwarding traffic
iptables -A FORWARD -j chain-incoming-log-and-drop
# Drop everything else
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
}
# clear iptables configuration
firewall_stop() {
iptables -F
iptables -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
}
# internal
# check if command is available
check_for_command(){
command -v "$1" 1>/dev/null 2>&-
}
# check if commands are available
check_commands(){
commands="iptables at"
status=0
for command in $commands; do
if ! check_for_command "$command"; then
log_action_msg "Command $command is not available"
status=1
fi
done
if [ "$status" -ne "0" ]; then
log_action_begin_msg "Starting iptables firewall"
log_action_end_msg 1
exit 1
fi
}
# check if user is root
check_user_permissions() {
if [ "$(id -u)" -ne "0" ]; then
log_action_begin_msg "Insufficient permissions"
log_action_end_msg 1
exit 1
fi
}
# wait for $WAIT_FOR seconds
wait_before_execution() {
log_action_msg "Waiting for $WAIT_FOR seconds"
sleep $WAIT_FOR
}
# check environment
check_commands
check_user_permissions
# execute action
case "$1" in
start|restart)
wait_before_execution
log_action_begin_msg "Starting firewall"
firewall_stop
firewall_start
log_action_end_msg 0
;;
stop)
wait_before_execution
log_action_begin_msg "Stopping firewall"
firewall_stop
log_action_end_msg 0
;;
test)
wait_before_execution
log_action_msg "Scheduling iptables firewall to stop in $TEST_FOR minutes"
at now + ${TEST_FOR} minutes <<EOF
$(readlink -f $0) stop
EOF
if [ "$?" -gt "0" ]; then
log_action_begin_msg "Error: firewall will not stop in $TEST_FOR minutes"
log_action_end_msg 1
exit 1
fi
log_action_msg "Loading firewall rules"
firewall_stop
firewall_start
log_action_begin_msg "Testing iptables firewall"
log_action_end_msg 0
;;
show)
iptables -L -v -n
;;
show-nat)
iptables -t nat -L -v -n
;;
esac
Show current iptables configuration for nat table.
$ sudo /etc/init.d/iptables-firewall show-nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Test firewall service for 5 minutes.
$ sudo /etc/init.d/iptables-firewall test
[info] Waiting for 15 seconds.
[info] Scheduling iptables firewall to stop in 5 minutes.
warning: commands will be executed using /bin/sh
job 7 at Tue Jan 2 00:53:00 2018
[info] Loading firewall rules.
[ ok ] Testing iptables firewall...done.
There can be too much information in the above example, so the recursive view is not really useful on the command line.
Use aptitude utility to display reverse package dependencies
Install a high-level interface to the package manager.
$ sudo apt-get install aptitude
Display verbose reverse dependencies for a particular package located in a configured repository.
$ aptitude search '~Dtmux'
p apt-dater - terminal-based remote package update manager
p byobu - text window manager, shell multiplexer, integrated De
p tmux-plugin-manager - tmux plugin manager based on git
p tmuxinator - Create and manage tmux sessions easily
Display reverse dependencies for a particular package located in a configured repository.
$ aptitude search -F '%p' '~Dtmux'
apt-dater
byobu
tmux-plugin-manager
tmuxinator
Use apt-rdepends utility to display reverse package dependencies
Install a utility that performs recursive reverse dependency listings similar to apt-cache.
$ sudo apt-get install apt-rdepends
Display reverse dependencies for a particular package located in a configured repository.