
How to make iptables configuration persistent using systemd


Make the iptables configuration persistent using a systemd service, with the additional possibility of disabling the firewall after a defined period of time.

Shell script

Create the /sbin/iptables-firewall.sh shell script. Edit the firewall_start function to apply your custom iptables configuration.

#!/bin/bash
# Configure iptables firewall

# Limit PATH
PATH="/sbin:/usr/sbin:/bin:/usr/bin"

# iptables configuration
firewall_start() {
  # Define chain to allow particular source addresses
  iptables -N chain-incoming-ssh
  iptables -A chain-incoming-ssh -s 192.168.1.149 -j ACCEPT -m comment --comment "local access"
  iptables -A chain-incoming-ssh -p tcp --dport 22 -j LOG  --log-prefix "[fw-inc-ssh] " -m limit --limit 6/min --limit-burst 4
  iptables -A chain-incoming-ssh -j DROP

  # Define chain to log and drop incoming packets
  iptables -N chain-incoming-log-and-drop
  iptables -A chain-incoming-log-and-drop -j LOG --log-prefix "[fw-inc-drop] " -m limit --limit 6/min --limit-burst 4
  iptables -A chain-incoming-log-and-drop -j DROP

  # Define chain to log and drop outgoing packets
  iptables -N chain-outgoing-log-and-drop
  iptables -A chain-outgoing-log-and-drop -j LOG --log-prefix "[fw-out-drop] " -m limit --limit 6/min --limit-burst 4
  iptables -A chain-outgoing-log-and-drop -j DROP

  # Drop invalid packets
  iptables -A INPUT -m conntrack --ctstate INVALID -j chain-incoming-log-and-drop

  # Accept everything on loopback
  iptables -A INPUT  -i lo -j ACCEPT
  iptables -A OUTPUT -o lo -j ACCEPT

  # ACCEPT incoming packets for established connections
  iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

  # Accept incoming ICMP
  iptables -A INPUT -p icmp -j ACCEPT

  # Accept incoming SSH
  iptables -A INPUT -p tcp --dport 22 -j chain-incoming-ssh

  # Accept outgoing packets for established connections
  iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

  # Accept outgoing DNS
  iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
  iptables -A OUTPUT -p udp --dport 53 -j ACCEPT

  # Accept outgoing NTP
  iptables -A OUTPUT -p tcp --dport 123 -j ACCEPT
  iptables -A OUTPUT -p udp --dport 123 -j ACCEPT

  # Accept outgoing HTTP/S
  iptables -A OUTPUT -p tcp --dport 80  -j ACCEPT
  iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT

  # Accept outgoing SSH
  iptables -A OUTPUT -p tcp --dport 22  -j ACCEPT

  # Accept outgoing ICMP
  iptables -A OUTPUT -p icmp -j ACCEPT

  # Log not accounted outgoing traffic
  iptables -A OUTPUT -j chain-outgoing-log-and-drop

  # Log not accounted forwarding traffic
  iptables -A FORWARD -j chain-incoming-log-and-drop

  # Drop everything else
  iptables -P INPUT   DROP
  iptables -P FORWARD DROP
  iptables -P OUTPUT  DROP
}

# clear iptables configuration
firewall_stop() {
  iptables -F
  iptables -X
  iptables -P INPUT   ACCEPT
  iptables -P FORWARD ACCEPT
  iptables -P OUTPUT  ACCEPT
}

# execute action
case "$1" in
  start|restart)
    echo "Starting firewall"
    firewall_stop
    firewall_start
    ;;
  stop)
    echo "Stopping firewall"
    firewall_stop
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac

Ensure that the shell script is owned by root.

$ sudo chown root:root /sbin/iptables-firewall.sh

Ensure that the script is executable.

$ sudo chmod 750 /sbin/iptables-firewall.sh

systemd configuration

Create iptables-firewall service.

$ cat << EOF | sudo tee /etc/systemd/system/iptables-firewall.service
[Unit]
Description=iptables firewall service
After=network.target

[Service]
Type=oneshot
ExecStart=/sbin/iptables-firewall.sh start
RemainAfterExit=true
ExecStop=/sbin/iptables-firewall.sh stop
StandardOutput=journal

[Install]
WantedBy=multi-user.target
EOF

Create iptables-firewall-test service.

$ cat << EOF | sudo tee /etc/systemd/system/iptables-firewall-test.service
[Unit]
Description=iptables firewall service test
BindsTo=iptables-firewall.service
After=iptables-firewall.service

[Service]
Type=oneshot
ExecStart=/usr/bin/systemd-run --on-active=180 --timer-property=AccuracySec=1s /bin/systemctl stop iptables-firewall.service
StandardOutput=journal

[Install]
WantedBy=multi-user.target
EOF

Reload systemd manager configuration.

$ sudo systemctl daemon-reload

Usage

Test iptables firewall using iptables-firewall-test service.

$ sudo systemctl start iptables-firewall-test

It will start the iptables-firewall service.

$ sudo systemctl status iptables-firewall
● iptables-firewall.service - iptables firewall service
   Loaded: loaded (/etc/systemd/system/iptables-firewall.service; disabled; vendor preset: enabled)
   Active: active (exited) since Tue 2018-01-02 07:14:42 CST; 2min 32s ago
  Process: 772 ExecStart=/sbin/iptables-firewall.sh start (code=exited, status=0/SUCCESS)
 Main PID: 772 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/iptables-firewall.service

Jan 02 07:14:41 debian systemd[1]: Starting iptables firewall service...
Jan 02 07:14:41 debian iptables-firewall.sh[772]: Starting firewall
Jan 02 07:14:42 debian systemd[1]: Started iptables firewall service.

It will also schedule a temporary timer to stop the iptables-firewall service after 3 minutes.

$ sudo systemctl list-timers
NEXT                         LEFT          LAST                         PASSED      UNIT
Tue 2018-01-02 07:17:42 CST  2min 21s left n/a                          n/a         run-rb4c64fbca4ee4c
Tue 2018-01-02 10:58:01 CST  3h 42min left Mon 2018-01-01 18:54:12 CST  12h ago     apt-daily.timer
Wed 2018-01-03 06:29:06 CST  23h left      Tue 2018-01-02 06:10:06 CST  1h 5min ago apt-daily-upgrade.t
Wed 2018-01-03 06:59:14 CST  23h left      Tue 2018-01-02 06:59:14 CST  16min ago   systemd-tmpfiles-cl

4 timers listed.
Pass --all to see loaded but inactive timers, too.
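The transient timer created by systemd-run exists only until it elapses. If you prefer a persistent variant of the same idea, a minimal sketch (hypothetical unit names, not part of the setup above) could look like this:

```ini
# /etc/systemd/system/iptables-firewall-stop.timer (hypothetical)
[Timer]
OnActiveSec=180
AccuracySec=1s
Unit=iptables-firewall-stop.service

# /etc/systemd/system/iptables-firewall-stop.service (hypothetical)
[Service]
Type=oneshot
ExecStart=/bin/systemctl stop iptables-firewall.service
```

The transient unit used in this article is more convenient here precisely because it cleans up after itself.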

Enable iptables-firewall service at boot.

$ sudo systemctl enable iptables-firewall

Start iptables-firewall service.

$ sudo systemctl start iptables-firewall

These are nothing more than standard systemd operations. Use the iptables-firewall-test service to test the iptables configuration, but remember that it will stop the iptables-firewall service after 3 minutes.


How to create a meta-package


Create the simplest possible meta-package to install multiple software packages at once and quickly set up a familiar environment.

Install equivs utility.

$ sudo apt-get install equivs

Create configuration file for meta-package.

$ equivs-control my-terminal-utilities

Inspect created configuration template.

$ cat my-terminal-utilities
### Commented entries have reasonable defaults.
### Uncomment to edit them.
# Source: <source package name; defaults to package name>
Section: misc
Priority: optional
# Homepage: <enter URL here; no default>
Standards-Version: 3.9.2

Package: <package name; defaults to equivs-dummy>
# Version: <enter version here; defaults to 1.0>
# Maintainer: Your Name <yourname@example.com>
# Pre-Depends: <comma-separated list of packages>
# Depends: <comma-separated list of packages>
# Recommends: <comma-separated list of packages>
# Suggests: <comma-separated list of packages>
# Provides: <comma-separated list of packages>
# Replaces: <comma-separated list of packages>
# Architecture: all
# Multi-Arch: <one of: foreign|same|allowed>
# Copyright: <copyright file; defaults to GPL2>
# Changelog: <changelog file; defaults to a generic changelog>
# Readme: <README.Debian file; defaults to a generic one>
# Extra-Files: <comma-separated list of additional files for the doc directory>
# Files: <pair of space-separated paths; First is file to include, second is destination>
#  <more pairs, if there's more than one file to include. Notice the starting space>
Description: <short description; defaults to some wise words>
 long description and info
 .
 second paragraph

Modify configuration file to reflect meta-package dependencies.

$ cat my-terminal-utilities
Section: misc
Priority: optional
Standards-Version: 3.9.2
Version: 1.0+artificial
Package: my-terminal-utilities
Depends: git, tmux, vim, ncdu, htop, curl, jq, httpie
Description: essential terminal utilities
 several useful utilities
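
If you provision machines regularly, the same control file can be written non-interactively with a here-document instead of editing the equivs-control template; a sketch reproducing the file above:

```shell
# Write the equivs control file for the meta-package in one step
cat > my-terminal-utilities << 'EOF'
Section: misc
Priority: optional
Standards-Version: 3.9.2
Version: 1.0+artificial
Package: my-terminal-utilities
Depends: git, tmux, vim, ncdu, htop, curl, jq, httpie
Description: essential terminal utilities
 several useful utilities
EOF
```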

Build meta-package.

$ equivs-build my-terminal-utilities
dh_testdir
dh_testroot
dh_prep
dh_testdir
dh_testroot
dh_install
dh_install: Compatibility levels before 9 are deprecated (level 7 in use)
dh_installdocs
dh_installdocs: Compatibility levels before 9 are deprecated (level 7 in use)
dh_installchangelogs
dh_compress
dh_fixperms
dh_installdeb
dh_installdeb: Compatibility levels before 9 are deprecated (level 7 in use)
dh_gencontrol
dh_md5sums
dh_builddeb
dpkg-deb: building package 'my-terminal-utilities' in '../my-terminal-utilities_1.0+artificial_all.deb'.

The package has been created.
Attention, the package has been created in the current directory,
not in ".." as indicated by the message above!

Install created package.

$ sudo dpkg -i my-terminal-utilities_1.0+artificial_all.deb
(Reading database ... 27051 files and directories currently installed.)
Preparing to unpack my-terminal-utilities_1.0+artificial_all.deb ...
Unpacking my-terminal-utilities (1.0+artificial) over (1.0+artificial) ...
dpkg: dependency problems prevent configuration of my-terminal-utilities:
 my-terminal-utilities depends on git; however:
  Package git is not installed.
 my-terminal-utilities depends on tmux; however:
  Package tmux is not installed.
 my-terminal-utilities depends on ncdu; however:
  Package ncdu is not installed.
 my-terminal-utilities depends on htop; however:
  Package htop is not installed.
 my-terminal-utilities depends on jq; however:
  Package jq is not installed.
 my-terminal-utilities depends on httpie; however:
  Package httpie is not installed.

dpkg: error processing package my-terminal-utilities (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 my-terminal-utilities

Resolve dependencies.

$ sudo apt-get --fix-broken install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following additional packages will be installed:
  git git-man htop httpie jq libcurl3-gnutls liberror-perl libevent-2.0-5 libjq1 libonig4 libpopt0
  libpython-stdlib libutempter0 ncdu python python-cffi-backend python-chardet python-cryptography
  python-enum34 python-idna python-ipaddress python-minimal python-openssl python-pkg-resources
  python-pyasn1 python-pygments python-requests python-setuptools python-six python-urllib3 python2.7
  python2.7-minimal rsync tmux
Suggested packages:
  git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-cvs
  git-mediawiki git-svn lsof strace python-doc python-tk python-cryptography-doc
  python-cryptography-vectors python-enum34-doc python-openssl-doc python-openssl-dbg doc-base
  ttf-bitstream-vera python-socks python-setuptools-doc python-ntlm python2.7-doc binfmt-support
The following NEW packages will be installed:
  git git-man htop httpie jq libcurl3-gnutls liberror-perl libevent-2.0-5 libjq1 libonig4 libpopt0
  libpython-stdlib libutempter0 ncdu python python-cffi-backend python-chardet python-cryptography
  python-enum34 python-idna python-ipaddress python-minimal python-openssl python-pkg-resources
  python-pyasn1 python-pygments python-requests python-setuptools python-six python-urllib3 python2.7
  python2.7-minimal rsync tmux
0 upgraded, 34 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 11.0 MB of archives.
After this operation, 49.5 MB of additional disk space will be used.
Do you want to continue? [Y/n] 
[...]
Setting up python-chardet (2.3.0-2) ...
Setting up python-cryptography (1.7.1-3) ...
Setting up python-requests (2.12.4-1) ...
Setting up httpie (0.9.8-1) ...
Setting up python-openssl (16.2.0-1) ...
Setting up my-terminal-utilities (1.0+artificial) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...

Verify that the meta-package is marked as installed.

$ apt-cache policy my-terminal-utilities
my-terminal-utilities:
  Installed: 1.0+artificial
  Candidate: 1.0+artificial
  Version table:
 *** 1.0+artificial 100
        100 /var/lib/dpkg/status

Verify that dependencies are installed (requires apt-rdepends package as described earlier).

$ apt-rdepends --print-state --state-follow=none my-terminal-utilities
Reading package lists... Done
Building dependency tree       
Reading state information... Done
my-terminal-utilities
  Depends: curl [Installed]
  Depends: git [Installed]
  Depends: htop [Installed]
  Depends: httpie [Installed]
  Depends: jq [Installed]
  Depends: ncdu [Installed]
  Depends: tmux [Installed]
  Depends: vim [Installed]

How to display deb package contents


I have already described how to list the contents of a specified package using the apt-file utility, but you do not need it to check the contents of a downloaded package.

I will use the rsyslog-pgsql package to show you how to perform different operations.

$ apt-get download rsyslog-pgsql
Get:1 http://deb.debian.org/debian stretch/main amd64 rsyslog-pgsql amd64 8.24.0-1 [188 kB]
Fetched 188 kB in 0s (7422 kB/s)

Display information about this particular package.

$ dpkg --info rsyslog-pgsql_8.24.0-1_amd64.deb
new debian package, version 2.0.
size 187916 bytes: control archive=1724 bytes.
379 bytes,    18 lines   *  config               #!/bin/sh
498 bytes,    15 lines      control
563 bytes,     7 lines      md5sums
1118 bytes,    43 lines   *  postinst             #!/bin/sh
1459 bytes,    59 lines   *  postrm               #!/bin/sh
124 bytes,    10 lines   *  prerm                #!/bin/sh
Package: rsyslog-pgsql
Source: rsyslog
Version: 8.24.0-1
Architecture: amd64
Maintainer: Michael Biebl <biebl@debian.org>
Installed-Size: 219
Depends: libc6 (>= 2.4), libpq5, debconf (>= 0.5) | debconf-2.0, rsyslog (= 8.24.0-1), dbconfig-common, ucf
Recommends: postgresql-client
Suggests: postgresql
Section: admin
Priority: extra
Homepage: http://www.rsyslog.com/
Description: PostgreSQL output plugin for rsyslog
This plugin allows rsyslog to write syslog messages into a PostgreSQL
database.

Display deb package contents.

$ dpkg --contents rsyslog-pgsql_8.24.0-1_amd64.deb
drwxr-xr-x root/root         0 2017-01-18 22:14 ./
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/lib/
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/lib/x86_64-linux-gnu/
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/lib/x86_64-linux-gnu/rsyslog/
-rw-r--r-- root/root     14360 2017-01-18 22:14 ./usr/lib/x86_64-linux-gnu/rsyslog/ompgsql.so
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/share/
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/share/dbconfig-common/
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/share/dbconfig-common/data/
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/share/dbconfig-common/data/rsyslog-pgsql/
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/share/dbconfig-common/data/rsyslog-pgsql/install/
-rw-r--r-- root/root      1006 2017-01-18 22:14 ./usr/share/dbconfig-common/data/rsyslog-pgsql/install/pgsql
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/share/doc/
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/share/doc/rsyslog-pgsql/
-rw-r--r-- root/root       438 2017-01-18 22:14 ./usr/share/doc/rsyslog-pgsql/NEWS.Debian.gz
-rw-r--r-- root/root     15362 2017-01-18 22:14 ./usr/share/doc/rsyslog-pgsql/changelog.Debian.gz
-rw-r--r-- root/root    163021 2017-01-10 09:09 ./usr/share/doc/rsyslog-pgsql/changelog.gz
-rw-r--r-- root/root      4454 2017-01-18 22:14 ./usr/share/doc/rsyslog-pgsql/copyright
drwxr-xr-x root/root         0 2017-01-18 22:14 ./usr/share/rsyslog-pgsql/
-rw-r--r-- root/root       152 2017-01-18 22:14 ./usr/share/rsyslog-pgsql/rsyslog-pgsql.conf.template

Simple as that.
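
All of the dpkg options shown above work because a deb package is, at the container level, an ar archive holding debian-binary, a control tarball and a data tarball. A self-contained demonstration of that container format (a throwaway archive, not a real package):

```shell
# Build and list a throwaway ar archive to illustrate the container format of a deb package
tmp=$(mktemp -d)
echo "2.0" > "${tmp}/debian-binary"
( cd "${tmp}" && ar rc demo.a debian-binary && ar t demo.a )
rm -rf "${tmp}"
```

Running ar t on a real deb package lists its members directly.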

How to aggregate weekly data to create custom statistics


Recently, I parsed the logs of several applications to generate custom weekly reports. It was a very interesting exercise. I created two shell scripts that illustrate the whole idea by parsing HAProxy log files, so I can come back to it in the future.

Display top 404 pages

Shell script used to display the top 404 pages for the last three weeks for the web frontend and the blog and statistics backends.

#!/bin/bash
# Display weekly top 404 requests for n previous weeks

# number of previous weeks
number_of_weeks="3"

# directory to keep aggregated data
aggregated_logs_directory="/tmp/aggregated"

# application name
application="haproxy"

# application log files
log_filename="/var/log/haproxy.log*"

# date format to search for: [15/Mar/2018:
file_log_date_format="\[%d/%b/%Y:"

# file types to search for: [a-Z0-9]\+\.\(php\|html\|txt\|png\)
file_types="php html txt png"

# frontends to filter
limit_frontends="^web$"
#limit_frontends=".*"

# backends to filter
limit_backends="blog\|statistics"
#limit_backends=".*"

# Print current date
echo "Current date: $(date)"
echo

# create aggregated log directory if it is missing
if [ ! -d "${aggregated_logs_directory}" ]; then
  echo "Creating aggregated log directory \"${aggregated_logs_directory}\""
  mkdir "${aggregated_logs_directory}"
else
  echo "Using aggregated log directory \"${aggregated_logs_directory}\""
fi

# loop over previous weeks
for n_weeks_ago in $(seq 1 ${number_of_weeks}); do
  # define pretty date from/to
  loop_pretty_date_from=$(date +%d/%b/%Y --date "last monday - ${n_weeks_ago} week + 0 day")
  loop_pretty_date_to=$(date +%d/%b/%Y --date "last monday - ${n_weeks_ago} week + 6 day")

  # define machine date from/to
  loop_txt_date_from=$(date +%Y%m%d --date "last monday - ${n_weeks_ago} week + 0 day")
  loop_txt_date_to=$(date +%Y%m%d --date "last monday - ${n_weeks_ago} week + 6 day")

  # define log filename
  aggregated_log_filename="${application}_${loop_txt_date_from}-${loop_txt_date_to}.log"

  # aggregate data
  if [ ! -f "${aggregated_logs_directory}/${aggregated_log_filename}" ]; then
    echo "Creating ${aggregated_log_filename} log file to store data from ${loop_pretty_date_from} to ${loop_pretty_date_to}"
    for weekday in $(seq 0 6); do
      zgrep $(date +${file_log_date_format} --date "last monday - ${n_weeks_ago} weeks + ${weekday} days") ${log_filename} | tee -a ${aggregated_logs_directory}/${aggregated_log_filename} >/dev/null
    done
  else
    echo "Using existing ${aggregated_log_filename} log file that contains data from ${loop_pretty_date_from} to ${loop_pretty_date_to}"
  fi

  # parse data
  if [ -f "${aggregated_logs_directory}/${aggregated_log_filename}" ]; then
    echo "Parsing data from ${loop_pretty_date_from} to ${loop_pretty_date_to} (${n_weeks_ago} week/weeks ago)"

    # filter frontends
    frontends=$(awk '{if ($8 !~ ":"&& $8 !~ "~"&& !seen_arr[$8]++) print $8}' ${aggregated_logs_directory}/${aggregated_log_filename} | grep "${limit_frontends}")

    # filter backends and highlight nosrv
    backends=$(awk '{split($9,backend,"/");if ($8 !~ ":"&& !seen_arr[backend[1]]++) {if (backend[2] !~ "NOSRV" )  print backend[1]; else print "NOSRV";}}' ${aggregated_logs_directory}/${aggregated_log_filename} | grep "${limit_backends}" | sort)

    # parse each log file for top 404 pages
    for frontend in ${frontends}; do
      echo "${frontend} frontend"
      for backend in ${backends}; do
        echo "->${backend}"
        if [ "${backend}" = "NOSRV" ]; then
          not_found_list=$(grep "${frontend}\([~]\)\? ${frontend}/"  ${aggregated_logs_directory}/${aggregated_log_filename} | awk '$11 == "404" {query=substr($0,index($0,$18)); print query}' | sort  | uniq -c | sort -hr | head)
        else
          not_found_list=$(grep "${frontend}\([~]\)\? ${backend}/"  ${aggregated_logs_directory}/${aggregated_log_filename} | awk '$11 == "404" {query=substr($0,index($0,$18)); print query}' | sort  | uniq -c | sort -hr | head)
        fi

        if [ -z "$not_found_list" ]; then
          echo "  --- none ---"
        else
          echo  "$not_found_list"
        fi
      done
    done
    echo
  fi
done

Sample output.

Current date: Fri Mar 16 19:06:41 CET 2018

Creating aggregated log directory "/tmp/aggregated"
Creating haproxy_20180305-20180311.log log file to store data from 05/Mar/2018 to 11/Mar/2018
Parsing data from 05/Mar/2018 to 11/Mar/2018 (1 week/weeks ago)
web frontend
->web-blog-production
    892 "GET /wp-login.php HTTP/1.1"
    596 "GET /apple-touch-icon.png HTTP/1.1"
    560 "GET /apple-touch-icon-precomposed.png HTTP/1.1"
    470 "GET /xfavicon.png.pagespeed.ic.ITJELUENXe.png HTTP/1.1"
     74 "GET /assets/images/blog_sleeplessbeastie_eu_image.png HTTP/1.1"
     72 "GET /tags/index.php HTTP/1.0"
     72 "GET /index.php HTTP/1.0"
     66 "GET /2013/01/21/how-to-automate-mouse-and-keyboard/index.php HTTP/1.0"
     66 "GET /01/21/how-to-automate-mouse-and-keyboard/index.php HTTP/1.0"
     40 "GET /favicon.png.pagespeed.ce.I9KrGowxSl.png HTTP/1.1"
->web-statistics-production
  --- none ---

Creating haproxy_20180226-20180304.log log file to store data from 26/Feb/2018 to 04/Mar/2018
Parsing data from 26/Feb/2018 to 04/Mar/2018 (2 week/weeks ago)
web frontend
->web-blog-production
   1012 "GET /wp-login.php HTTP/1.1"
    568 "GET /apple-touch-icon.png HTTP/1.1"
    554 "GET /apple-touch-icon-precomposed.png HTTP/1.1"
    502 "GET /xfavicon.png.pagespeed.ic.ITJELUENXe.png HTTP/1.1"
     72 "GET /tags/index.php HTTP/1.0"
     72 "GET /index.php HTTP/1.0"
     72 "GET /assets/images/blog_sleeplessbeastie_eu_image.png HTTP/1.1"
     44 "GET /favicon.png.pagespeed.ce.I9KrGowxSl.png HTTP/1.1"
     26 "HEAD /apple-touch-icon-precomposed.png HTTP/1.1"
     26 "HEAD /apple-touch-icon.png HTTP/1.1"
->web-statistics-production
  --- none ---

Creating haproxy_20180219-20180225.log log file to store data from 19/Feb/2018 to 25/Feb/2018
Parsing data from 19/Feb/2018 to 25/Feb/2018 (3 week/weeks ago)
web frontend
->web-blog-production
   1068 "GET /wp-login.php HTTP/1.1"
    846 "GET /apple-touch-icon.png HTTP/1.1"
    816 "GET /apple-touch-icon-precomposed.png HTTP/1.1"
    134 "GET /xfavicon.png.pagespeed.ic.ITJELUENXe.png HTTP/1.1"
     66 "GET /tags/index.php HTTP/1.0"
     66 "GET /index.php HTTP/1.0"
     44 "GET /2013/01/21/how-to-automate-mouse-and-keyboard/index.php HTTP/1.0"
     42 "GET /01/21/how-to-automate-mouse-and-keyboard/index.php HTTP/1.0"
     40 "GET /assets/images/blog_sleeplessbeastie_eu_image.png HTTP/1.1"
     32 "HEAD /apple-touch-icon-precomposed.png HTTP/1.1"
->web-statistics-production
      4 "HEAD /https://statistics.sleeplessbeastie.eu/ HTTP/1.1"
      4 "GET /rules.abe HTTP/1.1"

Display occurrences of specified file types

Shell script used to display weekly statistics for occurrences of the specified file types.

#!/bin/bash
# Display weekly statistics for several file types for n previous weeks

# display mode
# 1 - pretty
# 2 - regular
display_mode="1"

# number of previous weeks
number_of_weeks="3"

# directory to keep aggregated data
aggregated_logs_directory="/tmp/aggregated"

# application name
application="haproxy"

# application log files
log_filename="/var/log/haproxy.log*"

# date format to search for: [15/Mar/2018:
file_log_date_format="\[%d/%b/%Y:"

# file types to search for: [a-zA-Z0-9]\+\.\(php\|html\|txt\|png\)
file_types="php html txt png"

# frontends to filter
limit_frontends="^web$"
#limit_frontends=".*"

# backends to filter
limit_backends="NOSRV\|blog\|statistics"
#limit_backends=".*"

# Print current date
echo "Current date: $(date)"
echo

# create aggregated log directory if it is missing
if [ ! -d "${aggregated_logs_directory}" ]; then
  if [ "${display_mode}" -eq "1" ]; then
    echo "Creating aggregated log directory \"${aggregated_logs_directory}\""
  fi
  mkdir "${aggregated_logs_directory}"
else
  if [ "${display_mode}" -eq "1" ]; then
    echo "Using aggregated log directory \"${aggregated_logs_directory}\""
  fi
fi

# loop over previous weeks
for n_weeks_ago in $(seq 1 ${number_of_weeks}); do
  # define pretty date from/to
  loop_pretty_date_from=$(date +%d/%b/%Y --date "last monday - ${n_weeks_ago} week + 0 day")
  loop_pretty_date_to=$(date +%d/%b/%Y --date "last monday - ${n_weeks_ago} week + 6 day")

  # define machine date from/to
  loop_txt_date_from=$(date +%Y%m%d --date "last monday - ${n_weeks_ago} week + 0 day")
  loop_txt_date_to=$(date +%Y%m%d --date "last monday - ${n_weeks_ago} week + 6 day")

  # define log filename
  aggregated_log_filename="${application}_${loop_txt_date_from}-${loop_txt_date_to}.log"

  # aggregate data
  if [ ! -f "${aggregated_logs_directory}/${aggregated_log_filename}" ]; then
    if [ "${display_mode}" -eq "1" ]; then
      echo "Creating ${aggregated_log_filename} log file to store data from ${loop_pretty_date_from} to ${loop_pretty_date_to}"
    fi
    for weekday in $(seq 0 6); do
      zgrep $(date +${file_log_date_format} --date "last monday - ${n_weeks_ago} weeks + ${weekday} days") ${log_filename} | tee -a ${aggregated_logs_directory}/${aggregated_log_filename} >/dev/null
    done
  else
    if [ "${display_mode}" -eq "1" ]; then
      echo "Using existing ${aggregated_log_filename} log file that contains data from ${loop_pretty_date_from} to ${loop_pretty_date_to}"
    fi
  fi

  # parse data
  if [ -f "${aggregated_logs_directory}/${aggregated_log_filename}" ]; then
    if [ "${display_mode}" -eq "1" ]; then
      echo "Parsing data from ${loop_pretty_date_from} to ${loop_pretty_date_to} (${n_weeks_ago} week/weeks ago)"
    fi

    # filter frontends
    frontends=$(awk '{if ($8 !~ ":"&& $8 !~ "~"&& !seen_arr[$8]++) print $8}' ${aggregated_logs_directory}/${aggregated_log_filename} | grep "${limit_frontends}")

    # filter backends
    #backends=$(awk '{split($9,backend,"/");if ($8 !~ ":"&& !seen_arr[backend[1]]++) print backend[1]}' ${aggregated_logs_directory}/${aggregated_log_filename} | grep "${limit_backends}")
    # highlight nosrv
    backends=$(awk '{split($9,backend,"/");if ($8 !~ ":"&& !seen_arr[backend[1]]++) {if (backend[2] !~ "NOSRV" )  print backend[1]; else print "NOSRV";}}' ${aggregated_logs_directory}/${aggregated_log_filename} | grep "${limit_backends}" | sort)

    # parse each file type/element
    for frontend in ${frontends}; do
      if [ "${display_mode}" -eq "1" ]; then
        echo "${frontend} frontend"
      fi
      for backend in ${backends}; do
        if [ "${display_mode}" -eq "1" ]; then
          echo "->${backend}"
        fi
        for element in ${file_types}; do
          if [ "${backend}" = "NOSRV" ]; then
            count=$(grep "${frontend}\([~]\)\? ${frontend}/<NOSRV>"  ${aggregated_logs_directory}/${aggregated_log_filename} | grep -c "[a-zA-Z0-9]\+\.${element}")
          else
            # grep for frontend and frontend~ (ssl)
            count=$(grep "${frontend}\([~]\)\? ${backend}/"  ${aggregated_logs_directory}/${aggregated_log_filename} | grep -c "[a-zA-Z0-9]\+\.${element}")
          fi
          if [ "${display_mode}" -eq "2" ]; then
            echo "${loop_pretty_date_from} - ${loop_pretty_date_to} (${n_weeks_ago} week/weeks ago) ${frontend}->${backend}: ${element} file found ${count} times"
          elif [ "${display_mode}" -eq "1" ]; then
            if [ "${count}" -gt "0" ]; then
              echo "  ${element} file found ${count} times"
            fi
          fi
        done
      done
    done
    echo
  fi
done

Sample output.

Current date: Fri Mar 16 19:27:59 CET 2018

Creating aggregated log directory "/tmp/aggregated"
Creating haproxy_20180305-20180311.log log file to store data from 05/Mar/2018 to 11/Mar/2018
Parsing data from 05/Mar/2018 to 11/Mar/2018 (1 week/weeks ago)
web frontend
->NOSRV
  php file found 2030 times
  html file found 8272 times
  txt file found 2622 times
  png file found 1044 times
->web-blog-production
  php file found 1184 times
  html file found 206 times
  txt file found 2602 times
  png file found 160770 times
->web-statistics-production
  php file found 360992 times
  html file found 608 times
  txt file found 50 times
  png file found 836 times

Creating haproxy_20180226-20180304.log log file to store data from 26/Feb/2018 to 04/Mar/2018
Parsing data from 26/Feb/2018 to 04/Mar/2018 (2 week/weeks ago)
web frontend
->NOSRV
  php file found 1822 times
  html file found 9682 times
  txt file found 2722 times
  png file found 950 times
->web-blog-production
  php file found 1276 times
  html file found 216 times
  txt file found 2604 times
  png file found 159288 times
->web-statistics-production
  php file found 269462 times
  html file found 822 times
  txt file found 52 times
  png file found 1108 times

Creating haproxy_20180219-20180225.log log file to store data from 19/Feb/2018 to 25/Feb/2018
Parsing data from 19/Feb/2018 to 25/Feb/2018 (3 week/weeks ago)
web frontend
->NOSRV
  php file found 2028 times
  html file found 10712 times
  txt file found 2956 times
  png file found 796 times
->web-blog-production
  php file found 1376 times
  html file found 360 times
  txt file found 2816 times
  png file found 166808 times
->web-statistics-production
  php file found 352380 times
  html file found 1218 times
  txt file found 98 times
  png file found 1278 times

These shell scripts merely illustrate the whole idea of generating weekly reports from existing log files, so you can improve them further.
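
Both scripts derive their weekly windows from GNU date relative expressions, and that arithmetic is worth sanity-checking on its own; a minimal sketch of the same computation:

```shell
# Compute last week's Monday-to-Sunday window exactly as the scripts above do
from=$(date +%Y%m%d --date "last monday - 1 week + 0 day")
to=$(date +%Y%m%d --date "last monday - 1 week + 6 day")
echo "aggregated window: ${from}-${to}"
```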

How to verify deb package contents


Verify package contents by hand or use a simple shell script to automate the process.

I will use the rsyslog-pgsql package to illustrate this operation.

$ apt-get download rsyslog-pgsql
Get:1 http://deb.debian.org/debian stretch/main amd64 rsyslog-pgsql amd64 8.24.0-1 [188 kB]
Fetched 188 kB in 0s (7422 kB/s)

Manual operation

Create temporary directory.

$ mkdir contents

Extract the package contents.

$ dpkg -x rsyslog-pgsql_8.24.0-1_amd64.deb ./contents/

Extract the md5sums file from the control archive.

$ dpkg --ctrl-tarfile rsyslog-pgsql_8.24.0-1_amd64.deb | tar -x ./md5sums

Verify MD5 hashes.

$ cd contents && md5sum -c ../md5sums
usr/lib/x86_64-linux-gnu/rsyslog/ompgsql.so: OK
usr/share/dbconfig-common/data/rsyslog-pgsql/install/pgsql: OK
usr/share/doc/rsyslog-pgsql/NEWS.Debian.gz: OK
usr/share/doc/rsyslog-pgsql/changelog.Debian.gz: OK
usr/share/doc/rsyslog-pgsql/changelog.gz: OK
usr/share/doc/rsyslog-pgsql/copyright: OK
usr/share/rsyslog-pgsql/rsyslog-pgsql.conf.template: OK
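
The md5sums file extracted from the control archive is simply the input format that md5sum -c consumes: one line per file containing a hash, two spaces and a relative path. A self-contained illustration of the round trip on a scratch file:

```shell
# Record a hash in md5sums format, then verify it the same way the package md5sums file is checked
tmp=$(mktemp -d)
cd "${tmp}"
echo "hello" > file.txt
md5sum file.txt > md5sums   # writes: <hash>  file.txt
md5sum -c md5sums           # prints: file.txt: OK
cd / && rm -rf "${tmp}"
```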

Simple shell script

This shell script uses the same method as the manual one to verify package contents.

#!/bin/bash
# verify package contents


# verify that first parameter is defined
if [ ! "$#" -eq "1" ]; then 
  echo "Verify package contents"
  echo
  echo "Usage:"
  echo "  $0 package.deb"
  exit 1
fi

# verify that first parameter is a file
if [ ! -f "$1" ]; then
  echo "Parameter $1 is not a file"
  exit 1
fi

# verify that first parameter is a deb package
file_type=$(file -b "$1")
if [ "$file_type" != "Debian binary package (format 2.0)" ]; then
  echo "Parameter $1 is not a deb package"
  exit 1
fi

# create temporary directory and a trap
temp_dir=$(mktemp -d)
trap 'rm -rf $temp_dir' EXIT

# extract package contents
dpkg -x "$1" "$temp_dir"

# verify package contents
dpkg --ctrl-tarfile "$1" | tar -x --directory "$temp_dir" ./md5sums
cd "$temp_dir" && md5sum -c md5sums

exit $?

Sample output.
usr/lib/x86_64-linux-gnu/rsyslog/ompgsql.so: OK
usr/share/dbconfig-common/data/rsyslog-pgsql/install/pgsql: OK
usr/share/doc/rsyslog-pgsql/NEWS.Debian.gz: OK
usr/share/doc/rsyslog-pgsql/changelog.Debian.gz: OK
usr/share/doc/rsyslog-pgsql/changelog.gz: OK
usr/share/doc/rsyslog-pgsql/copyright: OK
usr/share/rsyslog-pgsql/rsyslog-pgsql.conf.template: OK

Advanced shell script

This shell script uses more advanced methods to verify package contents and does not need a temporary directory to extract the package data.

#!/bin/bash
# verify package contents


# verify that first parameter is defined
if [ ! "$#" -eq "1" ]; then 
  echo "Verify package contents"
  echo
  echo "Usage:"
  echo "  $0 package.deb"
  exit 1
fi

# verify that first parameter is a file
if [ ! -f "$1" ]; then
  echo "Parameter $1 is not a file"
  exit 1
fi

# verify that first parameter is a deb package
file_type=$(file -b "$1")
if [ "$file_type" != "Debian binary package (format 2.0)" ]; then
  echo "Parameter $1 is not a deb package"
  exit 1
fi

# default exit code
exit_code=0

# verify package contents
# (process substitution keeps the loop in the current shell, so exit_code survives)
while read -r md5sum_hash md5sum_file; do
  extracted_file_hash=$(dpkg --fsys-tarfile "$1" | tar -x --to-stdout "./${md5sum_file}" | md5sum | cut -d " " -f 1)
  if [ "$md5sum_hash" == "$extracted_file_hash" ]; then
    echo "${md5sum_file}: OK"
  else
    echo "${md5sum_file}: BAD"
    exit_code=2
  fi
done < <(dpkg --ctrl-tarfile "$1" | tar -x --to-stdout ./md5sums)

exit $exit_code
Sample output.

usr/lib/x86_64-linux-gnu/rsyslog/ompgsql.so: OK
usr/share/dbconfig-common/data/rsyslog-pgsql/install/pgsql: OK
usr/share/doc/rsyslog-pgsql/NEWS.Debian.gz: OK
usr/share/doc/rsyslog-pgsql/changelog.Debian.gz: OK
usr/share/doc/rsyslog-pgsql/changelog.gz: OK
usr/share/doc/rsyslog-pgsql/copyright: OK
usr/share/rsyslog-pgsql/rsyslog-pgsql.conf.template: OK

It is superior to the previous one, but unfortunately it is very slow.
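
One detail worth knowing when modifying such loops: a while loop fed through a pipe runs in a subshell, so variables assigned inside it do not survive the loop; process substitution keeps the loop in the current shell, which matters when an exit code must be carried out of the loop. A minimal demonstration:

```shell
#!/bin/bash
# a while loop fed through a pipe runs in a subshell: the assignment is lost
count=0
printf 'a\nb\n' | while read -r line; do count=$((count + 1)); done
echo "after pipe: $count"

# a while loop fed through process substitution runs in the current shell
count=0
while read -r line; do count=$((count + 1)); done < <(printf 'a\nb\n')
echo "after process substitution: $count"
```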

How to count separate line entries inside a pipe


Use a shell script to count separate line entries inside a pipe to see how many entries got filtered out at each stage. It can be useful in some scenarios, especially for testing.
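
The core mechanics — pass standard input through unchanged while reporting counts on standard error — can be sketched with a single awk invocation; the full script below adds prefixes, usage handling, and plural forms on top of it:

```shell
#!/bin/bash
# copy stdin to stdout, report line and character counts on stderr
printf 'alpha\nbeta\n' | \
  awk '{ lines++; chars += length($0) + 1; print }
       END { print "Counted " lines " lines and " chars " characters" > "/dev/stderr" }'
```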

Shell script

The complete shell script.

#!/bin/bash
# Read standard input in a pipeline, count lines/characters and pass to the next command

# variables
prefix=""
n_lines=0
n_chars=0

# define usage function
function usage {
  echo "Read standard input in a pipeline, count lines/characters and pass to the next command"
  echo "Use -p parameter to provide a prefix"
  echo "Results will be shown on standard error"
}

# parse arguments
while getopts "p:" option; do
  case $option in
    "p")
      prefix=${OPTARG}
      ;;
    esac
done

if [ "$#" -eq "0" ] || [ "$#" -eq "2" ]; then
  if [ ! -t 0 ] && [ -p /dev/stdin ]; then
    file_read=false
    until $file_read; do
      read -r -u 0 line || file_read=true
      if [ "$file_read" == false ]; then
        n_lines=$(expr $n_lines \+ 1)
        l_chars=$(printf "%s\n" "$line" | wc -c)
        n_chars=$(expr $n_chars \+ $l_chars)
        printf "%s\n" "$line" > /dev/stdout
      elif [ "$file_read" == true ] && [ -n "$line" ]; then
        n_lines=$(expr $n_lines \+ 1)
        l_chars=$(printf "%s" "$line" | wc -c)
        n_chars=$(expr $n_chars \+ $l_chars)
        printf "%s\n" "$line" > /dev/stdout
      fi
    done
  else
    usage
    exit
  fi
fi

# set single or plural words
if [ "$n_lines" -eq "1" ]; then
  line_single_or_plural="line"
else
  line_single_or_plural="lines"
fi

if [ "$n_chars" -eq "1" ]; then
  char_single_or_plural="character"
else
  char_single_or_plural="characters"
fi

# display lines/characters count
if [ -n "$prefix" ]; then
  echo -n "${prefix}: " > /dev/stderr
fi
echo "Counted $n_lines $line_single_or_plural and $n_chars $char_single_or_plural" > /dev/stderr

Examples

Usage information.

$ pipe_count.sh 
Read standard input in a pipeline, count lines/characters and pass to the next command
Use -p parameter to provide a prefix
Results will be shown on standard error

Sample standalone usage inside a pipeline with df command.

$ df -h | pipe_count.sh -p "_df" | sed 1d | pipe_count.sh -p "_df w/o header" | awk '$5 > 10 { print }' | pipe_count.sh -p "_df filtered for usage above 10%"
_df: Counted 11 lines and 412 characters
_df w/o header: Counted 10 lines and 361 characters
tmpfs                              788M   79M  709M  11% /run
/dev/mapper/vg-root                225G   32G  183G  15% /
tmpfs                              3,9G  273M  3,6G   7% /dev/shm
/dev/sda2                          465M  363M   74M  84% /boot
_df filtered for usage above 10%: Counted 4 lines and 150 characters

Sample usage inside a pipeline with ss command.

$ ss -t -n -H | pipe_count.sh -p "_ss tcp" | grep -v 127.0.0.1 | pipe_count.sh -p "_ss w/o localhost" | grep 443 | pipe_count.sh -p "_ss filtered for port 443"
_ss tcp: Counted 27 lines and 1211 characters
_ss w/o localhost: Counted 13 lines and 623 characters
ESTAB      0      0      192.168.1.177:59754              216.58.209.46:443
ESTAB      0      0      192.168.1.177:56682              46.165.244.206:443
_ss filtered for port 443: Counted 2 lines and 97 characters

Additional notes

I presume that /dev/stderr exists. I do not check if it is available, but maybe I should verify this specific situation.

How to resolve hostname to IP address within a shell script


Create a straightforward shell script to resolve a hostname to an IPv4/IPv6 address and reuse parts of it in other projects.

Shell script

Store the following self-explanatory shell script.

#!/bin/bash
# Resolve hostname to IP address
# https://blog.sleeplessbeastie.eu/2018/10/29/how-to-resolve-hostname-to-ip-address-within-a-shell-script/

# define local or remote DNS server
dns_server="208.67.222.123"

# function to get IP address
function get_ip {
  ip_address=""
  if [ -n "$1" ]; then
    hostname="${1}"
    if [ -z "$dns_server" ]; then
      # use primary Google DNS server if it is not provided
      dns_server="8.8.8.8"
    fi

    if [ -z "$query_type" ]; then
      # query A record for IPv4 by default, use AAAA for IPv6
      query_type="A"
    fi
    # check
    host -t ${query_type} ${hostname} ${dns_server} &>/dev/null
    if [ "$?" -eq "0" ]; then
      # get
      ip_address="$(host -t ${query_type} ${hostname} ${dns_server} | awk '/has.*address/ {print $NF; exit}')"
    else
      exit 1
    fi
  else
    exit 2
  fi
  echo $ip_address
}

# main
hostname="${1}"
for query in "A-IPv4" "AAAA-IPv6"; do
  query_type="$(echo $query | cut -d- -f 1)"
  ipversion="$(echo $query | cut -d- -f 2)"

  address="$(get_ip ${hostname})"
  if [ "$?" -eq "0" ]; then
    if [ -n "${address}" ]; then
      echo "Host ${hostname} has ${ipversion} address $address"
    fi
  else
    echo "There was some kind of error"
  fi
done

Sample usage

IPv4/IPv6 address for sleeplessbeastie.eu

$ getip.sh sleeplessbeastie.eu
Host sleeplessbeastie.eu has IPv4 address 84.16.240.28
Host sleeplessbeastie.eu has IPv6 address 2a00:c98:2060:a000:1:0:ca7:a

IPv4/IPv6 address for blog.sleeplessbeastie.eu

$ getip.sh blog.sleeplessbeastie.eu
Host blog.sleeplessbeastie.eu has IPv4 address 84.16.240.28
Host blog.sleeplessbeastie.eu has IPv6 address 2a00:c98:2060:a000:1:0:ca7:b

IPv4/IPv6 address for debian.org

$ getip.sh debian.org
Host debian.org has IPv4 address 149.20.4.15
Host debian.org has IPv6 address 2001:67c:2564:a119::148:14

IPv4/IPv6 address for kali.org

$ getip.sh kali.org
Host kali.org has IPv4 address 192.124.249.10
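
If the host utility is not available, getent can act as a fallback; it uses the system resolver, so the custom DNS server selection from the script is lost (a sketch, with localhost used as a sample hostname):

```shell
#!/bin/bash
# resolve a hostname using the system resolver
# ahostsv4 selects IPv4 addresses, ahostsv6 selects IPv6 addresses
hostname="localhost"
ipv4_address="$(getent ahostsv4 "$hostname" | awk '{ print $1; exit }')"
echo "Host ${hostname} has IPv4 address ${ipv4_address}"
```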

How to create an artificial package to circumvent dependency checking


Create a simple artificial package to work around missing dependencies.

Install equivs utility.

$ sudo apt-get install equivs

Create a configuration file for the artificial package.

$ equivs-control libmissing

Inspect created configuration template.

$ cat libmissing
### Commented entries have reasonable defaults.
### Uncomment to edit them.
# Source: <source package name; defaults to package name>
Section: misc
Priority: optional
# Homepage: <enter URL here; no default>
Standards-Version: 3.9.2

Package: <package name; defaults to equivs-dummy>
# Version: <enter version here; defaults to 1.0>
# Maintainer: Your Name <yourname@example.com>
# Pre-Depends: <comma-separated list of packages>
# Depends: <comma-separated list of packages>
# Recommends: <comma-separated list of packages>
# Suggests: <comma-separated list of packages>
# Provides: <comma-separated list of packages>
# Replaces: <comma-separated list of packages>
# Architecture: all
# Multi-Arch: <one of: foreign|same|allowed>
# Copyright: <copyright file; defaults to GPL2>
# Changelog: <changelog file; defaults to a generic changelog>
# Readme: <README.Debian file; defaults to a generic one>
# Extra-Files: <comma-separated list of additional files for the doc directory>
# Files: <pair of space-separated paths; First is file to include, second is destination>
#  <more pairs, if there's more than one file to include. Notice the starting space>
Description: <short description; defaults to some wise words>
 long description and info
 .
 second paragraph

Modify the configuration file to reflect the artificial package data.

$ cat libmissing
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libmissing
Version: 1.1.0+artificial
Description: lib missing
 satisfy dependency for libmissing

Build artificial package.

$ equivs-build libmissing
dh_testdir
dh_testroot
dh_prep
dh_testdir
dh_testroot
dh_install
dh_install: Compatibility levels before 9 are deprecated (level 7 in use)
dh_installdocs
dh_installdocs: Compatibility levels before 9 are deprecated (level 7 in use)
dh_installchangelogs
dh_compress
dh_fixperms
dh_installdeb
dh_installdeb: Compatibility levels before 9 are deprecated (level 7 in use)
dh_gencontrol
dh_md5sums
dh_builddeb
dpkg-deb: building package 'libmissing' in '../libmissing_1.1.0+artificial_all.deb'.

The package has been created.
Attention, the package has been created in the current directory,
not in ".." as indicated by the message above!

Install created package.

$ sudo dpkg -i libmissing_1.1.0+artificial_all.deb
Selecting previously unselected package libmissing.
(Reading database ... 27043 files and directories currently installed.)
Preparing to unpack libmissing_1.1.0+artificial_all.deb ...
Unpacking libmissing (1.1.0+artificial) ...
Setting up libmissing (1.1.0+artificial) ...

Package will be marked as installed.

$ apt-cache policy libmissing
libmissing:
  Installed: 1.1.0+artificial
  Candidate: 1.1.0+artificial
  Version table:
 *** 1.1.0+artificial 100
        100 /var/lib/dpkg/status

Additional information

Template file used by equivs-control is located in /usr/share/equivs/template.ctl.
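
The template also offers a Provides field, so one artificial package can satisfy dependencies on several package names at once; a hypothetical variation of the configuration above:

```
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libmissing
Version: 1.1.0+artificial
Provides: libmissing-data
Description: lib missing
 satisfy dependency for libmissing and libmissing-data
```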


How to encrypt portable external hard drive


Encrypt portable external hard drive using Linux Unified Key Setup to protect data in transit.

Connect new and empty portable external hard drive to identify it.

[Mon Mar 19 04:20:11 2018] usb 3-2: new SuperSpeed USB device number 6 using xhci_hcd
[Mon Mar 19 04:20:11 2018] usb 3-2: New USB device found, idVendor=125f, idProduct=a35a
[Mon Mar 19 04:20:11 2018] usb 3-2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[Mon Mar 19 04:20:11 2018] usb 3-2: Product: HD650
[Mon Mar 19 04:20:11 2018] usb 3-2: Manufacturer: ADATA
[Mon Mar 19 04:20:11 2018] usb 3-2: SerialNumber: 4810358C3023
[Mon Mar 19 04:20:11 2018] scsi host4: uas
[Mon Mar 19 04:20:11 2018] scsi 4:0:0:0: Direct-Access     ADATA    HD650            0    PQ: 0 ANSI: 6
[Mon Mar 19 04:20:11 2018] sd 4:0:0:0: Attached scsi generic sg1 type 0
[Mon Mar 19 04:20:11 2018] sd 4:0:0:0: [sdb] Spinning up disk...
[Mon Mar 19 04:20:12 2018] .
[Mon Mar 19 04:20:13 2018] .
[Mon Mar 19 04:20:14 2018] .
[Mon Mar 19 04:20:15 2018] .
[Mon Mar 19 04:20:15 2018] ready
[Mon Mar 19 04:20:15 2018] sd 4:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[Mon Mar 19 04:20:15 2018] sd 4:0:0:0: [sdb] 4096-byte physical blocks
[Mon Mar 19 04:20:15 2018] sd 4:0:0:0: [sdb] Write Protect is off
[Mon Mar 19 04:20:15 2018] sd 4:0:0:0: [sdb] Mode Sense: 43 00 00 00
[Mon Mar 19 04:20:15 2018] sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[Mon Mar 19 04:20:15 2018]  sdb: sdb1
[Mon Mar 19 04:20:15 2018] sd 4:0:0:0: [sdb] Attached SCSI disk

It will likely contain a W95 FAT32 filesystem by default.

$ sudo sfdisk --list /dev/sdb
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: dos
Disk identifier: 0xf7316823

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1  *     2048 3907026943 3907024896  1.8T  c W95 FAT32 (LBA)

Unmount if it was mounted automatically.

$ mount | grep sdb
/dev/sdb1 on /media/milosz/ADATA HD650 type vfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)
$ sudo umount /dev/sdb1 

Initialize LUKS partition and set password.

$ sudo cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 256 --hash sha256 /dev/sdb1 

WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase:  ****************
Verify passphrase: ****************

Display header information of the LUKS partition.

$ sudo cryptsetup luksDump /dev/sdb1 
LUKS header information for /dev/sdb1

Version:       	1
Cipher name:   	aes
Cipher mode:   	xts-plain64
Hash spec:     	sha256
Payload offset:	65535
MK bits:       	256
MK digest:     	67 fe f5 dc 74 de fa 82 7a 19 67 cd a2 e3 41 61 94 bc 34 3f 
MK salt:       	a8 63 0b 89 26 16 9b 05 4d aa 19 dd a7 7c dd 6d 
               	d8 32 4d 1e c4 bd fd 50 0c 5b f8 6f c4 cd e4 e6 
MK iterations: 	84500
UUID:          	780554cb-5335-4dc0-80fc-43e7bb4cf16c

Key Slot 0: ENABLED
	Iterations:         	343163
	Salt:               	83 10 7e 0c d5 60 3e 2a 72 2f 44 fd 6c 47 93 d2 
	                      	ab e7 46 61 4a 26 62 5e a8 4e 6a a1 fb 62 95 d3 
	Key material offset:	8
	AF stripes:            	4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Open LUKS partition and map it as homeext after successful verification.

$ sudo cryptsetup luksOpen /dev/sdb1 homeext
Enter passphrase for /dev/sdb1: ****************

Format the encrypted homeext virtual block device.

$ sudo mkfs.ext4 /dev/mapper/homeext 
mke2fs 1.42.13 (17-May-2015)
/dev/mapper/homeext contains a ext4 file system
	created on Wed Mar 28 20:03:45 2018
Proceed anyway? (y,n) y
Creating filesystem with 488369920 4k blocks and 122093568 inodes
Filesystem UUID: 68100a88-4049-427d-ba0d-85ab54c936bd
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
	102400000, 214990848

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done       

Create mount directory and mount virtual block device.

$ sudo mkdir /mnt/homeext
$ sudo mount /dev/mapper/homeext /mnt/homeext/
$ sudo chown milosz:milosz /mnt/homeext
$ sudo chmod 770 /mnt/homeext/

Unmount and close virtual block device after required data is copied.

$ sudo umount /mnt/homeext 
$ sudo cryptsetup luksClose homeext

Use the luksOpen, mount and umount, luksClose operations next time.
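
These recurring operations can be wrapped in a small helper script. This is only a sketch; the device path, mapping name, and mount point repeat the values used above:

```shell
#!/bin/bash
# open/mount or unmount/close the encrypted external drive
device="/dev/sdb1"
name="homeext"
mount_point="/mnt/homeext"

case "${1:-}" in
  open)
    sudo cryptsetup luksOpen "$device" "$name" && \
    sudo mount "/dev/mapper/$name" "$mount_point"
    ;;
  close)
    sudo umount "$mount_point" && \
    sudo cryptsetup luksClose "$name"
    ;;
  *)
    echo "Usage: ${0##*/} open|close"
    ;;
esac
```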

You can benchmark available algorithms using the following command.

$ cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1      1052787 iterations per second
PBKDF2-sha256     688041 iterations per second
PBKDF2-sha512     578046 iterations per second
PBKDF2-ripemd160  651289 iterations per second
PBKDF2-whirlpool  227555 iterations per second
#  Algorithm | Key |  Encryption |  Decryption
     aes-cbc   128b   592.2 MiB/s  2362.0 MiB/s
 serpent-cbc   128b    79.5 MiB/s   507.3 MiB/s
 twofish-cbc   128b   164.9 MiB/s   320.2 MiB/s
     aes-cbc   256b   438.2 MiB/s  1899.4 MiB/s
 serpent-cbc   256b    79.9 MiB/s   492.6 MiB/s
 twofish-cbc   256b   168.7 MiB/s   319.9 MiB/s
     aes-xts   256b  1425.9 MiB/s  1419.8 MiB/s
 serpent-xts   256b   498.3 MiB/s   486.5 MiB/s
 twofish-xts   256b   294.9 MiB/s   311.0 MiB/s
     aes-xts   512b  1231.8 MiB/s  1157.3 MiB/s
 serpent-xts   512b   496.8 MiB/s   484.4 MiB/s
 twofish-xts   512b   312.3 MiB/s   319.1 MiB/s

How to display processes using the most CPU or memory


Display processes using the most CPU or memory using nothing more than basic Linux utilities.

Display processes using the most CPU

Display the top ten processes using the most CPU, including additional information like the exact date when the process was started and how much CPU time it has already used.

$ ps -ax -opid,lstart,pcpu,cputime,command --sort=-%cpu,-cputime | head -11
  PID                  STARTED %CPU     TIME COMMAND
  199 Thu Feb  1 15:44:40 2018  0.6 00:46:59 [md1_raid1]
22111 Tue Feb  6 00:02:53 2018  0.2 00:02:17 /usr/bin/ruby /usr/bin/jekyll serve --future --port 3000 --host 10.66.91.53
24551 Tue Feb  6 00:06:44 2018  0.2 00:01:50 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 --no-stack-rlimit daemon -e /var/log/icinga2/error.log
28123 Tue Feb  6 11:42:55 2018  0.2 00:00:21 top
 2113 Thu Feb  1 15:44:46 2018  0.1 00:12:07 [z_wr_iss]
    7 Thu Feb  1 15:44:38 2018  0.1 00:09:04 [rcu_sched]
24620 Tue Feb  6 00:06:45 2018  0.1 00:00:52 postgres: 9.6/main: icinga_ido icinga_ido 10.66.91.37(55272) idle in transaction
 8689 Thu Feb  1 15:45:33 2018  0.0 00:06:04 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 --no-stack-rlimit daemon -e /var/log/icinga2/error.log
 8454 Thu Feb  1 15:45:31 2018  0.0 00:05:57 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 --no-stack-rlimit daemon -e /var/log/icinga2/error.log
 7306 Thu Feb  1 15:45:19 2018  0.0 00:05:50 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 --no-stack-rlimit daemon -d -e /var/log/icinga2/icinga2.err

Processes using the most memory

Display the top ten processes using the most memory, including additional information like the exact date when the process was started and how much resident memory (RSS) it occupies.

$ ps -ax -opid,lstart,pmem,rss,command --sort=-pmem,-rss | head -11
  PID                  STARTED %MEM   RSS COMMAND
 6038 Mon Feb  5 23:06:45 2018  1.0 21464 postgres: 9.6/main: icinga_ido icinga_ido 10.66.91.37(55272) idle in transaction
   94 Thu Feb  1 14:45:05 2018  0.8 18772 /usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf
  438 Thu Feb  1 14:45:15 2018  0.8 17188 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 --no-stack-rlimit daemon -e /var/log/icinga2/error.log
   37 Thu Feb  1 14:44:59 2018  0.7 15936 /lib/systemd/systemd-journald
  104 Thu Feb  1 14:45:09 2018  0.7 15284 postgres: 9.6/main: checkpointer process
  106 Thu Feb  1 14:45:09 2018  0.3  7476 postgres: 9.6/main: wal writer process
  107 Thu Feb  1 14:45:09 2018  0.2  4828 postgres: 9.6/main: autovacuum launcher process
    1 Thu Feb  1 14:44:58 2018  0.2  4788 /sbin/init
  464 Thu Feb  1 14:45:15 2018  0.2  4640 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 --no-stack-rlimit daemon -e /var/log/icinga2/error.log
  108 Thu Feb  1 14:45:09 2018  0.1  3540 postgres: 9.6/main: stats collector process

Processes using more than a specified percentage of CPU and memory

Display processes using at least 10% of available memory and more than 5% of CPU.

$ ps -ax --no-headers -opid,pmem,pcpu,command | \
  awk '$2>=10 && $3>5 {printf "Process %-6s is using %6s%% mem and %6s%% cpu -- %s\n",$1,$2,$3,substr($0, index($0,$4)) }'
Process 24199  is using    10.1% mem and    8.0% cpu -- php-fpm: master process (/etc/php/7.0/fpm/php-fpm.conf)
Process 24620  is using    20.1% mem and    5.1% cpu -- postgres: 9.6/main: icinga_ido icinga_ido 10.66.91.37(55272) idle in transaction

Display processes using at least 20% of available memory and more than 10% of CPU.

$ ps -ax -opid,pmem,rss,pcpu,cputime,command | \
  awk 'NR==1 { print $1 "\t" $2 "\t" $3 "\t" $4 "\t" $5 "\t\t" $6 } NR>1 && $2>=20 && $4>10 {print $1 "\t" $2 "\t" $3 "\t" $4 "\t" $5 "\t" substr($0, index($0,$6)) }'
PID	%MEM	RSS	%CPU	TIME		COMMAND
6038	20.0	414640	10.1	00:00:52	postgres: 9.6/main: icinga_ido icinga_ido 10.66.91.37(55272) idle in transaction
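
The thresholds can also be parameterized, turning the one-liner into a small reusable script; the default values of 10 and 5 are arbitrary choices:

```shell
#!/bin/bash
# list processes above the given %MEM and %CPU thresholds
mem_threshold="${1:-10}"
cpu_threshold="${2:-5}"

ps -ax --no-headers -opid,pmem,pcpu,command | \
  awk -v mem="$mem_threshold" -v cpu="$cpu_threshold" \
      '$2 >= mem && $3 > cpu { print }'
```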

How to extract rpm package on macOS


Extract RPM package on macOS to access its contents, which is especially useful for some old source packages.

Download the installation script for Homebrew, the missing package manager for macOS.

$ curl --silent --fail --location  https://raw.githubusercontent.com/Homebrew/install/master/install -o brew-install 

Execute and complete installation process.

$ ruby brew-install

Install rpm2cpio utility.

$ brew install rpm2cpio
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 2 taps (caskroom/cask, homebrew/core).
==> New Formulae
go-jira                                                   htslib                                                    jabba                                                     keystone
==> Updated Formulae
awscli ✔             aces_container       bmake                docfx                frugal               ibex                 libphonenumber       mdds                 node-build           sbcl                 xctool
faac ✔               angular-cli          clojure              doctl                geckodriver          icoutils             libpqxx              media-info           nomad                smali                xrootd
imagemagick ✔        apache-geode         cockroach            dub                  geoipupdate          imagemagick@6        libtensorflow        mercurial            opencv               sqlmap               xtensor
openssl ✔            apr-util             codemod              erlang               gjs                  iperf3               libvirt              mg                   osm2pgrouting        statik               yarn
openssl@1.1 ✔        armadillo            conan                expat                gnu-units            ironcli              libxml2              micropython          pdfpc                svgo
pandoc ✔             at-spi2-atk          conjure-up           file-roller          gomplate             jena                 lwtools              minimal-racket       pipenv               tfenv
pyqt ✔               at-spi2-core         consul-template      firebase-cli         gradle               jfrog-cli-go         mailutils            mkdocs               poco                 tippecanoe
sip ✔                atlassian-cli        crowdin              flow                 gucharmap            knot                 mairix               mkvalidator          prest                vagrant-completion
terminal-notifier ✔  bacula-fd            dbus                 fluent-bit           gutenberg            kompose              mariadb              mongoose             re2                  vault-cli
vim ✔                bash-preexec         dcos-cli             fonttools            highlight            libass               mariadb@10.0         mvnvm                redex                vte3
abcmidi              bibtexconv           dmd                  freetds              hyperscan            libhttpseverywhere   maxima               node                 resty                wireguard-tools

==> Installing dependencies for rpm2cpio: xz
==> Installing rpm2cpio dependency: xz
==> Downloading https://homebrew.bintray.com/bottles/xz-5.2.3.sierra.bottle.tar.gz
==> Downloading from https://akamai.bintray.com/25/2518e5105c2b290755cda0fd5cd7f71eea4cd4741b70c48250eed1750c3a6814
######################################################################## 100.0%
==> Pouring xz-5.2.3.sierra.bottle.tar.gz
🍺  /usr/local/Cellar/xz/5.2.3: 92 files, 1.4MB
==> Installing rpm2cpio
==> Downloading https://homebrew.bintray.com/bottles/rpm2cpio-1.3.sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring rpm2cpio-1.3.sierra.bottle.tar.gz
🍺  /usr/local/Cellar/rpm2cpio/1.3: 2 files, 3.7K

Extract RPM package.

$ rpm2cpio.pl freeswitch-1.6.17-7.mga6.src.rpm | cpio -idmv
check_fs.cfg
check_fs_registered
communicator_semi_6000_20080321.tar.gz
freeswitch-1.2.12-dkms-skypopen.patch
freeswitch-1.2.12-mod_skypopen.patch
freeswitch-1.2.13-mod_shout-ltinfo.patch
freeswitch-1.2.13-python.patch
freeswitch-1.2.13-tinfo.patch
freeswitch-1.2.13-writestring.patch
freeswitch-1.2.3-ac_config.diff
freeswitch-1.2.3-fix-str-fmt.patch
freeswitch-1.2.3-link.patch
freeswitch-1.2.3-mod_ha_cluster.patch
freeswitch-1.4.14-mod_nibblebill-legb-hangup.diff
freeswitch-1.4.15-openssl-1.0.2.patch
freeswitch-1.4.4-gcc491-configure-lame.patch
freeswitch-1.4.7-no-pedantic-perl.patch
freeswitch-1.4.7-pgsql-build.diff
freeswitch-1.6.17-armv7hl-abi.patch
freeswitch-1.6.17.tar.xz
freeswitch-1.6.8-mga-stop-downloading-sounds.patch
freeswitch-contrib-master.tar.bz2
freeswitch-mod_ha_cluster-gcc48.patch
freeswitch-tmpfiles.conf
freeswitch.service
freeswitch.spec
perl-gcc-pedantic-define-working.diff
pocketsphinx-0.8.tar.gz
sphinxbase-0.8.tar.gz
186947 blocks

Done.
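
The cpio flags used above are -i (extract), -d (create leading directories), -m (preserve modification times) and -v (verbose); they apply to any cpio archive, not only one produced by rpm2cpio, as this self-contained round trip shows:

```shell
#!/bin/bash
# build a small cpio archive and extract it with the same flags
workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

mkdir -p "$workdir/src" "$workdir/dst"
echo "hello" > "$workdir/src/file.txt"

# -o creates an archive from the file list provided on standard input
(cd "$workdir/src" && find . -type f | cpio -o) > "$workdir/demo.cpio"

# -idmv extracts it, creating directories and preserving timestamps
(cd "$workdir/dst" && cpio -idmv < ../demo.cpio)
cat "$workdir/dst/file.txt"
```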

How to display output of multiple commands using columns


Use the paste utility to merge lines using the tab character as a delimiter, expand to convert tabs to spaces, and column to format input into multiple columns.

Two columns displaying output of multiple commands

Display some basic system information and some random Debian related epigram.

$ paste <(printf "Hostname: $(hostname)\nDate: $(LC_ALL=C date)"; printf "\n\n"; printf "Memory:\n"; head -4 /proc/meminfo) \
        <(printf "\n\n\n";LC_ALL=C df -h -x tmpfs -x devtmpfs) \
        <(printf "\n";fortune debian-hints | expand -t 8)  | \
  column -s $'\t' -tne
Hostname: milosz-XPS-13-9343                                                                       
Date: Tue Feb  6 21:39:04 CET 2018                                                                 Debian Hint #13: If you don't like the default options used in a Debian
                                                                                                   package, you can download the source and build a version which uses the
Memory:                             Filesystem                   Size  Used Avail Use% Mounted on  options you prefer. See http://www.debian.org/doc/FAQ/ch-pkg_basics.html
MemTotal:        8059088 kB         /dev/mapper/ubuntu--vg-root  226G   90G  126G  42% /           (sections 6.13 and 6.14) for more information.
MemFree:         1338300 kB         /dev/sda2                    473M  336M  114M  75% /boot       
MemAvailable:    3353428 kB         /dev/sda1                    511M  4.7M  507M   1% /boot/efi   However, bear in mind that most options in most packages can be configured
Buffers:          233632 kB                                                                        at runtime, and do not require recompiling the package.

Display some basic system information and a nice ascii art.

$ paste <(fortune mario.arteascii | expand -t 8) \
        <(printf "Hostname: $(hostname)\nDate: $(LC_ALL=C date)"; printf "\n\n"; printf "Memory:\n"; head -4 /proc/meminfo; printf "\n\n";LC_ALL=C df -h -x tmpfs -x devtmpfs) | \
  column -s $'\t' -tne
                                             Hostname: milosz-XPS-13-9343
             ..ooooox.                       Date: Tue Feb  6 22:22:49 CET 2018
        ..ooo@@@XXX%xx..                     
     ..oo@@XXX&x&xxx...                      Memory:
   .o@XX%%xx..                   ...         MemTotal:        8059088 kB
   o@X%x.                    ..oXXXoooo.     MemFree:          404848 kB
 .@X%x.                  ..o@^^      ^^@X.   MemAvailable:    2913868 kB
.ooo@@@@@@@ooo..      ..o@@^           @X%   Buffers:          242820 kB
o@@^^^     ^^^@@@ooo.oo@@^   *          %x.  
.o@          *   ^.  .                  .%^  
 .o.            .o@o o@.  .. oox.     .X%.   Filesystem                   Size  Used Avail Use% Mounted on
 .@x     .    %xxX@@   @o.     ^^Xxxx@@@^    /dev/mapper/ubuntu--vg-root  226G   90G  125G  42% /
  .@  .xxx     .o@@^   ^@%%o..   .XX@@^      /dev/sda2                    473M  336M  114M  75% /boot
   .@@XX%      .x%@   .  @xxo^..             /dev/sda1                    511M  4.7M  507M   1% /boot/efi
     ^XX%%^  .@@X %  XXx @@^x%..             
            .o@@XX%xx.. ^^XX%%xx...          
           .oXXxx...                         
           .ooX@@@xxoo..                     
          .x%. .xxx.@Xx..                    
          .x^  ^X^ .o@Xx.                    
           ^    '   .o@x                     
           '          ^.                     
kalev kaarna - benkalev at ut.ee             

There are at least a couple of possible use cases involving this method, mostly informative ones.
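
Stripped of the decorations, the whole technique fits on one line: paste glues the streams together with tabs and column aligns them.

```shell
#!/bin/bash
# display two command outputs side by side
paste <(printf 'one\ntwo\nthree\n') <(printf 'alpha\nbeta\n') | column -s $'\t' -t
```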

How to force user to change password


Alter the password expiry date to force a user to change the password on next login.

Display user password expiry information.

$ sudo chage -l milosz
Last password change                                    : Jul 24, 2017
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7

Change user password expiry information to require password change on next login.

$ sudo chage -d 0 milosz

Display user password expiry information.

$ sudo chage -l milosz
Last password change                                    : password must be changed
Password expires                                        : password must be changed
Password inactive                                       : password must be changed
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7

User will be forced to change the password on next login.

$ ssh milosz@192.0.2.10
milosz@192.0.2.10's password: *********
You are required to change your password immediately (root enforced)
Linux debian 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Nov 26 15:29:41 2017 from 192.0.2.254
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for milosz.
(current) UNIX password:  *********
Enter new UNIX password:  *********
Retype new UNIX password: *********
passwd: password updated successfully
Connection to 192.0.2.10 closed.

Alternatively you can use passwd utility to achieve the same result.

$ sudo passwd milosz -e
passwd: password expiry information changed.
$ sudo chage -l milosz
Last password change                                    : password must be changed
Password expires                                        : password must be changed
Password inactive                                       : password must be changed
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7
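
To force the change for several accounts at once, loop over them. Shown here as a dry run printing the commands (remove echo to apply them); the user names are hypothetical:

```shell
#!/bin/bash
# expire the password for a list of accounts (dry run: remove "echo" to apply)
for user in alice bob carol; do
  echo sudo chage -d 0 "$user"
done
```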

How to schedule one-time task


Schedule a one-time task at a specific time, or when the system load drops below a defined level, using the at utility.

Install required software

This utility should be available by default, but in case it isn't, use the following command to install it.

$ sudo apt-get install at

Define users that are allowed to submit jobs

Use the /etc/at.allow or /etc/at.deny file to allow or deny users to execute this utility.

The root user with uid 0 is above these rules and can always execute the at utility.

If the /etc/at.allow file exists, then it is used to define users that are allowed to execute the at utility.

If the /etc/at.allow file does not exist, then the /etc/at.deny file is used to define users that are not allowed to execute the at utility.

Look at the following application's source code if you have any doubts.

/* Global functions */
int
check_permission()
{
  uid_t uid = geteuid();
  struct passwd *pentry;
  int    allow = 0, deny = 1;

  if (uid == 0)
    return 1;

  if ((pentry = getpwuid(uid)) == NULL) {
    perror("Cannot access user database");
    exit(EXIT_FAILURE);
  }

  allow = user_in_file(ETCDIR "/at.allow", pentry->pw_name);
  if (allow==0 || allow==1)
    return allow;

  /* There was an error while looking for pw_name in at.allow.
   * Check at.deny only when at.allow doesn't exist.
   */

  deny = user_in_file(ETCDIR "/at.deny", pentry->pw_name);
  return deny == 0;
}
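The same precedence logic can be sketched in shell. The file paths are taken as parameters here purely for illustration; the real utility hardcodes /etc/at.allow and /etc/at.deny and always permits root regardless of both files.

```shell
# Hedged sketch of the at.allow/at.deny precedence check.
at_permitted() {
  user="$1" allow="$2" deny="$3"
  if [ -f "$allow" ]; then
    # allow file exists: only listed users may use at
    grep -qx "$user" "$allow"
  else
    # allow file missing: everyone may use at except users listed in the deny file
    ! grep -qx "$user" "$deny" 2>/dev/null
  fi
}
```

For example, at_permitted milosz /etc/at.allow /etc/at.deny succeeds only when the real rules would allow that user.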

Schedule one-time jobs at specific time

Schedule commands to execute now.

$ at now
warning: commands will be executed using /bin/sh
at> some | commands
at> <EOT>
job 30 at Wed Nov 15  9:53:00 2017

Schedule commands to execute after two hours and do not send mail to user.

$ at -M now + 2 hours
warning: commands will be executed using /bin/sh
at> some | commands
at> <EOT>
job 26 at Wed Nov 15 11:54:00 2017

Schedule commands to execute at 14:00 on December 16, 2018 and send mail to user even if there was no output.

$ at -m 14:00 16.12.2018
warning: commands will be executed using /bin/sh
at> some | commands
at> <EOT>
job 27 at Sun Dec 16 14:00:00 2018

You can specify the date using any of the following formats: MMDD[CC]YY, MM/DD/[CC]YY, DD.MM.[CC]YY or [CC]YY-MM-DD.

Schedule /opt/bin/reminder.sh shell script to execute tomorrow at 13:30.

$ at -f /opt/bin/reminder.sh 13:30 tomorrow
warning: commands will be executed using /bin/sh
job 29 at Thu Nov 16 13:30:00 2017

Use the -v option to display the time the job will be executed before defining it.

$ at -v now + 4 hours
Wed Nov 15 13:55:00 2017

warning: commands will be executed using /bin/sh
at> some | commands
at> <EOT>
job 30 at Wed Nov 15 13:55:00 2017

There are more possibilities as there are many time modifiers available.

now             { COPY_TOK ; return NOW; }
am              { COPY_TOK ; return AM; }
pm              { COPY_TOK ; return PM; }
noon            { COPY_TOK ; return NOON; }
midnight        { COPY_TOK ; return MIDNIGHT; }
teatime         { COPY_TOK ; return TEATIME; }
sun(day)?       { COPY_TOK ; return SUN; }
mon(day)?       { COPY_TOK ; return MON; }
tue(sday)?      { COPY_TOK ; return TUE; }
wed(nesday)?    { COPY_TOK ; return WED; }
thu(rsday)?     { COPY_TOK ; return THU; }
fri(day)?       { COPY_TOK ; return FRI; }
sat(urday)?     { COPY_TOK ; return SAT; }
today           { COPY_TOK ; return TODAY; }
tomorrow        { COPY_TOK ; return TOMORROW; }
next            { COPY_TOK ; return NEXT; }
min             { COPY_TOK ; return MINUTE; }
minute(s)?      { COPY_TOK ; return MINUTE; }
hour(s)?        { COPY_TOK ; return HOUR; }
day(s)?         { COPY_TOK ; return DAY; }
week(s)?        { COPY_TOK ; return WEEK; }
month(s)?       { COPY_TOK ; return MONTH; }
year(s)?        { COPY_TOK ; return YEAR; }
jan(uary)?      { COPY_TOK ; return JAN; }
feb(ruary)?     { COPY_TOK ; return FEB; }
mar(ch)?        { COPY_TOK ; return MAR; }
apr(il)?        { COPY_TOK ; return APR; }
may             { COPY_TOK ; return MAY; }
jun(e)?         { COPY_TOK ; return JUN; }
jul(y)?         { COPY_TOK ; return JUL; }
aug(ust)?       { COPY_TOK ; return AUG; }
sep(tember)?    { COPY_TOK ; return SEP; }
oct(ober)?      { COPY_TOK ; return OCT; }
nov(ember)?     { COPY_TOK ; return NOV; }
dec(ember)?     { COPY_TOK ; return DEC; }
utc             { COPY_TOK ; return UTC; }
[0-9]{1}        { COPY_TOK ; COPY_VAL; return INT1DIGIT; }
[0-9]{2}        { COPY_TOK ; COPY_VAL; return INT2DIGIT; }
[0-9]{4}        { COPY_TOK ; COPY_VAL; return INT4DIGIT; }
[0-9]{5,8}      { COPY_TOK ; COPY_VAL; return INT5_8DIGIT; }
[0-9]+          { COPY_TOK ; COPY_VAL; return INT; }
[0-9]{1,2}\.[0-9]{1,2}\.[0-9]{2}([0-9]{2})?     { COPY_TOK ; COPY_VAL; return DOTTEDDATE; }
[0-9]{2}([0-9]{2})?-[0-9]{1,2}-[0-9]{1,2}       { COPY_TOK ; COPY_VAL; return HYPHENDATE; }
[012]?[0-9][:'h,.][0-9]{2}      { COPY_TOK ; COPY_VAL; return HOURMIN; }

You can read /usr/share/doc/at/timespec for more information.

Schedule one-time task when system goes below specific load

Schedule commands to execute when the load average drops below 1.5.

$ at -b
at> some | commands
at> <EOT>
job 31 at Wed Nov 15 09:55:56 2017

You can alter the default load average value that is used to determine when the system load is low enough to execute commands. To achieve this, you need to modify the daemon's startup parameters.
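To pick a sensible threshold, you can inspect the current load averages and CPU count first; a minimal sketch:

```shell
# The first three fields are the 1-, 5- and 15-minute load averages.
cat /proc/loadavg
# A threshold near the number of CPUs is a common starting point.
nproc
```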

Queues

By default, jobs are assigned to queue a and batch jobs to queue b.

You can assign custom queues from a to z and from A to Z. Jobs in uppercase queues are treated as batch jobs at the specified time.

$ at -q F teatime
warning: commands will be executed using /bin/sh
at> some | commands
at> <EOT>
job 32 at Wed Nov 15 16:00:00 2017

Get job details

List scheduled jobs.

$ at -l
29	Thu Nov 16 13:30:00 2017 a root
30	Wed Nov 15 13:55:00 2017 a root
26	Wed Nov 15 11:54:00 2017 a root
27	Sun Dec 16 14:00:00 2018 a root
32	Wed Nov 15 16:00:00 2017 F root

Get details for specific job.

$ at -c 30
#!/bin/sh
# atrun uid=0 gid=0
# mail root 0
umask 22
LANG=C.UTF-8; export LANG
container=lxc; export container
USER=root; export USER
PWD=/root; export PWD
HOME=/root; export HOME
SHLVL=1; export SHLVL
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; export PATH
cd /root || {
	 echo 'Execution directory inaccessible'>&2
	 exit 1
}
some | commands

Remove jobs

Remove scheduled job.

$ at -r 9

System service

Service is configured using systemd service file.

$ cat /etc/systemd/system/multi-user.target.wants/atd.service
[Unit]
Description=Deferred execution scheduler
Documentation=man:atd(8)

[Service]
ExecStart=/usr/sbin/atd -f
IgnoreSIGPIPE=false

[Install]
WantedBy=multi-user.target

Use the -l option to specify the load average threshold that was mentioned earlier.

ExecStart=/usr/sbin/atd -f -l 4

Reload systemd manager configuration.

$ sudo systemctl daemon-reload

Restart service to apply changes.

$ sudo systemctl restart atd

How to download Raspbian image and write it to SD card


Download a Raspbian image and write it to an SD card using as few commands as possible.

Download latest minimal Raspbian image without desktop.

$ wget --content-disposition https://downloads.raspberrypi.org/raspbian_lite_latest
--2018-10-28 19:38:12--  https://downloads.raspberrypi.org/raspbian_lite_latest
Resolving downloads.raspberrypi.org (downloads.raspberrypi.org)... 46.235.227.11, 93.93.128.211, 93.93.128.230, ...
Connecting to downloads.raspberrypi.org (downloads.raspberrypi.org)|46.235.227.11|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2018-10-11/2018-10-09-raspbian-stretch-lite.zip [following]
--2018-10-28 19:38:12--  https://downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2018-10-11/2018-10-09-raspbian-stretch-lite.zip
Reusing existing connection to downloads.raspberrypi.org:443.
HTTP request sent, awaiting response... 302 Found
Location: http://director.downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2018-10-11/2018-10-09-raspbian-stretch-lite.zip [following]
--2018-10-28 19:38:12--  http://director.downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2018-10-11/2018-10-09-raspbian-stretch-lite.zip
Resolving director.downloads.raspberrypi.org (director.downloads.raspberrypi.org)... 46.235.227.11, 93.93.128.211, 93.93.128.230, ...
Connecting to director.downloads.raspberrypi.org (director.downloads.raspberrypi.org)|46.235.227.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 368317240 (351M) [application/zip]
Saving to: '2018-10-09-raspbian-stretch-lite.zip'

2018-10-09-raspbian-stret 100%[===================================>] 351.25M  1.96MB/s    in 2m 59s

2018-10-28 19:41:12 (1.96 MB/s) - '2018-10-09-raspbian-stretch-lite.zip' saved [368317240/368317240]

Alternatively, download the standard Raspbian image that includes a desktop environment.

$ wget --content-disposition https://downloads.raspberrypi.org/raspbian_latest
--2018-10-28 19:42:24--  https://downloads.raspberrypi.org/raspbian_latest
Resolving downloads.raspberrypi.org (downloads.raspberrypi.org)... 46.235.227.11, 93.93.128.211, 93.93.128.230, ...
Connecting to downloads.raspberrypi.org (downloads.raspberrypi.org)|46.235.227.11|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://downloads.raspberrypi.org/raspbian/images/raspbian-2018-10-11/2018-10-09-raspbian-stretch.zip [following]
--2018-10-28 19:42:24--  https://downloads.raspberrypi.org/raspbian/images/raspbian-2018-10-11/2018-10-09-raspbian-stretch.zip
Reusing existing connection to downloads.raspberrypi.org:443.
HTTP request sent, awaiting response... 302 Found
Location: http://director.downloads.raspberrypi.org/raspbian/images/raspbian-2018-10-11/2018-10-09-raspbian-stretch.zip [following]
--2018-10-28 19:42:24--  http://director.downloads.raspberrypi.org/raspbian/images/raspbian-2018-10-11/2018-10-09-raspbian-stretch.zip
Resolving director.downloads.raspberrypi.org (director.downloads.raspberrypi.org)... 46.235.227.11, 93.93.128.211, 93.93.128.230, ...
Connecting to director.downloads.raspberrypi.org (director.downloads.raspberrypi.org)|46.235.227.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1431204144 (1.3G) [application/zip]
Saving to: '2018-10-09-raspbian-stretch.zip'

2018-10-09-raspbian-stret 100%[===================================>]   1.33G  2.40MB/s    in 11m 2s

2018-10-28 19:53:26 (2.06 MB/s) - '2018-10-09-raspbian-stretch.zip' saved [1431204144/1431204144]

Extract the image to the standard output and write it to your SD card at the same time.

$ unzip -p 2018-10-09-raspbian-stretch-lite.zip | sudo dd of=/dev/mmcblk0 oflag=sync status=progress bs=4M
1859059712 bytes (1,9 GB, 1,7 GiB) copied, 70 s, 26,6 MB/s 
0+14208 records in
0+14208 records out
1866465280 bytes (1,9 GB, 1,7 GiB) copied, 70,2763 s, 26,6 MB/s
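Before running dd, it is worth confirming which block device is the SD card; /dev/mmcblk0 used above is an assumption and may differ on your machine. A minimal check:

```shell
# List block devices to make sure the dd target really is the SD card
# and not a system disk.
lsblk -d -o NAME,SIZE,TYPE,MODEL
```

Only then run the unzip and dd pipeline against the confirmed device.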

Additional notes

I have used InfoZIP's unzip application provided by the unzip package.

$ unzip -v
UnZip 6.00 of 20 April 2009, by Debian. Original by Info-ZIP.

Latest sources and executables are at ftp://ftp.info-zip.org/pub/infozip/ ;
see ftp://ftp.info-zip.org/pub/infozip/UnZip.html for other sites.

Compiled with gcc 6.2.1 20161124 for Unix (Linux ELF).

UnZip special compilation options:
        ACORN_FTYPE_NFS
        COPYRIGHT_CLEAN (PKZIP 0.9x unreducing method not supported)
        SET_DIR_ATTRIB
        SYMLINKS (symbolic links supported, if RTL and file system permit)
        TIMESTAMP
        UNIXBACKUP
        USE_EF_UT_TIME
        USE_UNSHRINK (PKZIP/Zip 1.x unshrinking method supported)
        USE_DEFLATE64 (PKZIP 4.x Deflate64(tm) supported)
        UNICODE_SUPPORT [wide-chars, char coding: UTF-8] (handle UTF-8 paths)
        LARGE_FILE_SUPPORT (large files over 2 GiB supported)
        ZIP64_SUPPORT (archives using Zip64 for large files supported)
        USE_BZIP2 (PKZIP 4.6+, using bzip2 lib version 1.0.6, 6-Sept-2010)
        VMS_TEXT_CONV
        WILD_STOP_AT_DIR
        [decryption, version 2.11 of 05 Jan 2007]

UnZip and ZipInfo environment options:
           UNZIP:  [none]
        UNZIPOPT:  [none]
         ZIPINFO:  [none]
      ZIPINFOOPT:  [none]

How to define backup backend in HAProxy configuration


Define a backup backend in HAProxy configuration to choose the backend used depending on the number of usable servers.

HAProxy version.

$ haproxy -v
HA-Proxy version 1.7.5-2 2017/05/17
Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>

Default HAProxy configuration.

global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin
	stats timeout 30s
	user haproxy
	group haproxy
	daemon

	# Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private

	# Default ciphers to use on SSL-enabled listening sockets.
	# For more information, see ciphers(1SSL). This list is from:
	#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
	# An alternative list with additional directives can be obtained from
	#  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
	ssl-default-bind-options no-sslv3

defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

Use the nbsrv method to get the number of usable servers for a given backend and create the required ACL rules.

acl is-example-org hdr_dom(host) -i example.org
acl is-example-org-backend-dead nbsrv(example-org-backend) lt 1

use_backend example-org-secondary-backend if is-example-org  is-example-org-backend-dead
use_backend example-org-backend           if is-example-org 

Sample frontend and backend using the specified ACL rule.

frontend web
  bind :80
  #bind :443 ssl crt /etc/ssl/cert/

  option httplog

  option forwardfor except 127.0.0.1
  option forwardfor header X-Real-IP

  #redirect scheme https code 301 if !{ ssl_fc }

  acl is-example-org hdr_dom(host) -i example.org
  acl is-example-org-backend-dead nbsrv(example-org-backend) lt 1

  use_backend example-org-secondary-backend if is-example-org is-example-org-backend-dead
  use_backend example-org-backend           if is-example-org

backend example-org-backend
  mode http
  server example-server-1 10.0.10.15:80
  server example-server-2 10.0.10.16:80

backend example-org-secondary-backend
  mode http
  server example-secondary-server-1 10.0.10.17:80
  server example-secondary-server-2 10.0.10.18:80

Requests will be directed to the example-org-backend backend by default.

Jan 25 15:35:09 example haproxy[721]: 10.66.91.165:42384 [25/Jan/2018:19:35:09.443] web example-org-backend/example-server-1 0/0/0/4/4 200 9386 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"

Requests will be directed to the example-org-secondary-backend backend when the first one goes down.

Jan 25 15:36:29 example haproxy[721]: 10.66.91.165:42666 [25/Jan/2018:19:36:29.315] web example-org-secondary-backend/example-secondary-server-1 0/0/0/0/0 200 28948 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
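Note that nbsrv only drops when HAProxy actually marks servers as down. The sample backends above define no health checks, so every server would always be counted as usable and the secondary backend would never be selected; a sketch of the primary backend with checks enabled:

```
backend example-org-backend
  mode http
  server example-server-1 10.0.10.15:80 check
  server example-server-2 10.0.10.16:80 check
```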

How to detect chroot environment


Sometimes I wonder whether I am inside a chroot environment or not, but it turns out that it is very easy to answer this question.

non-chroot environment

All you need to do is look for the / directory entry inside the /proc/mounts file.

You can assume that a positive match means that you are outside of the chroot environment.

$ awk 'BEGIN{exit_code=1} $2 == "/" {exit_code=0} END{exit exit_code}' /proc/mounts
$ echo $?
0

A regular operating system needs to mount the root file-system.

$ cat /proc/mounts
/dev/mapper/vg00-root / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
[...]

chroot environment

A chroot environment does not need to mount the root filesystem.

$ sudo chroot debian-stretch-amd64
#
# awk 'BEGIN{exit_code=1} $2 == "/" {exit_code=0} END{exit exit_code}' /proc/mounts
# echo $?
1

The bind-mounted proc file-system does not show a mounted root file-system.

# cat /proc/mounts
udev /dev devtmpfs rw,nosuid,relatime,size=8160212k,nr_inodes=2040053,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0

A chroot environment without a mounted proc file-system will return a different error code.

# awk 'BEGIN{exit_code=1} $2 == "/" {exit_code=0} END{exit exit_code}' /proc/mounts
awk: cannot open /proc/mounts (No such file or directory)
# echo $?
2
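The three exit codes can be wrapped into a small reusable helper; a minimal sketch (the function name is my own):

```shell
# Exit codes follow the article: 0 = "/" found in /proc/mounts (outside
# chroot), 1 = not found (likely chrooted), 2 = /proc/mounts unreadable
# (proc not mounted).
root_mounted() {
  awk 'BEGIN{exit_code=1} $2 == "/" {exit_code=0} END{exit exit_code}' /proc/mounts
}

if root_mounted; then
  echo "outside chroot"
else
  echo "possibly inside chroot"
fi
```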

single process inside chroot environment

Use a similar method to determine whether a particular process is running inside a chroot environment.

Update package index inside chroot environment.

$ sudo chroot debian-stretch-amd64 bash -c "apt-get update"
Ign:1 http://cdn-fastly.deb.debian.org/debian stretch InRelease
Get:2 http://cdn-fastly.deb.debian.org/debian stretch Release [118 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian stretch Release.gpg [2434 B]
Get:4 http://cdn-fastly.deb.debian.org/debian stretch/main amd64 Packages [7123 kB]
Get:5 http://cdn-fastly.deb.debian.org/debian stretch/main Translation-en [5393 kB]
Get:6 http://cdn-fastly.deb.debian.org/debian stretch/contrib amd64 Packages [50.9 kB]
Get:7 http://cdn-fastly.deb.debian.org/debian stretch/contrib Translation-en [45.9 kB]
Get:8 http://cdn-fastly.deb.debian.org/debian stretch/non-free amd64 Packages [78.0 kB]
Get:9 http://cdn-fastly.deb.debian.org/debian stretch/non-free Translation-en [79.2 kB]
Fetched 12.9 MB in 3s (3501 kB/s)
Reading package lists... Done

Install nginx inside chroot environment.

$  sudo chroot debian-stretch-amd64 bash -c "apt-get -y install nginx"
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  fontconfig-config fonts-dejavu-core geoip-database libbsd0 libfontconfig1 libfreetype6 libgd3 libgeoip1 libicu57 libjbig0 libjpeg62-turbo libnginx-mod-http-auth-pam libnginx-mod-http-dav-ext libnginx-mod-http-echo
  libnginx-mod-http-geoip libnginx-mod-http-image-filter libnginx-mod-http-subs-filter libnginx-mod-http-upstream-fair libnginx-mod-http-xslt-filter libnginx-mod-mail libnginx-mod-stream libpng16-16 libssl1.1 libtiff5 libwebp6
  libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxml2 libxpm4 libxslt1.1 nginx-common nginx-full sgml-base ucf xml-core
Suggested packages:
  libgd-tools geoip-bin fcgiwrap nginx-doc ssl-cert sgml-base-doc debhelper
The following NEW packages will be installed:
  fontconfig-config fonts-dejavu-core geoip-database libbsd0 libfontconfig1 libfreetype6 libgd3 libgeoip1 libicu57 libjbig0 libjpeg62-turbo libnginx-mod-http-auth-pam libnginx-mod-http-dav-ext libnginx-mod-http-echo
  libnginx-mod-http-geoip libnginx-mod-http-image-filter libnginx-mod-http-subs-filter libnginx-mod-http-upstream-fair libnginx-mod-http-xslt-filter libnginx-mod-mail libnginx-mod-stream libpng16-16 libssl1.1 libtiff5 libwebp6
  libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxml2 libxpm4 libxslt1.1 nginx nginx-common nginx-full sgml-base ucf xml-core
0 upgraded, 39 newly installed, 0 to remove and 0 not upgraded.
Need to get 18.6 MB of archives.
After this operation, 58.9 MB of additional disk space will be used.
[...]

Start nginx inside the chroot environment.

$ sudo chroot debian-stretch-amd64 bash -c "/etc/init.d/nginx start"
[ ok ] Starting nginx: nginx.

Get the process id.

$ sudo chroot debian-stretch-amd64 bash -c "cat /var/run/nginx.pid"
20807

Check root directory for this particular process outside of the chroot environment.

$ sudo stat --format="%N"  /proc/20807/root
'/proc/20807/root' -> '/home/milosz/debian-base-image/debian-stretch-amd64'

Do not perform this check from inside the chroot environment, as you will get a misleading result skewed by the point of view of the chroot environment.

$ sudo chroot debian-stretch-amd64 bash -c "stat --format=\"%N\" /proc/20807/root"
'/proc/20807/root' -> '/'

How to install Prosody an Open source and modern XMPP communication server


Install Prosody, an open-source and modern XMPP communication server.

I will provide a Jabber service on the example.org domain using the xmpp.example.org server and multi-user chat on conference.example.org.

DNS configuration

Create the required DNS records. First, define A records for both sub-domains. After that, create SRV records to specify the location of the Jabber services using the format specified in RFC 2782 - DNS SRV RR.

xmpp              10800 IN A 192.0.2.200
conference        10800 IN A 192.0.2.200

_xmpp-client._tcp 10800 IN SRV 0 5 5222 xmpp
_xmpp-server._tcp 10800 IN SRV 0 5 5269 xmpp

_xmpp-server._tcp.conference 10800 IN SRV 0 5 5269 xmpp

Verify DNS configuration.

$ host -t SRV _xmpp-client._tcp.example.org
_xmpp-client._tcp.example.org has SRV record 0 5 5222 xmpp.example.org.
$ host -t SRV _xmpp-server._tcp.example.org
_xmpp-server._tcp.example.org has SRV record 0 5 5269 xmpp.example.org.
$ host -t SRV _xmpp-server._tcp.conference.example.org
_xmpp-server._tcp.conference.example.org has SRV record 0 5 5269 xmpp.example.org.
$ host -t A xmpp.example.org
xmpp.example.org has address 192.0.2.200
$ host -t A conference.example.org
conference.example.org has address 192.0.2.200

Install Jabber/XMPP server

Install packages required to complete installation process.

$ sudo apt-get install wget gnupg2 dirmngr apt-transport-https
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  ca-certificates gnupg gnupg-agent gnupg-l10n libassuan0 libcurl3-gnutls libffi6 libgmp10 libgnutls30 libhogweed4
  libidn2-0 libksba8 libldap-2.4-2 libldap-common libnettle6 libnghttp2-14 libnpth0 libp11-kit0 libpsl5
  libreadline7 librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libssh2-1 libssl1.1
  libtasn1-6 libunistring0 openssl pinentry-curses publicsuffix readline-common
Suggested packages:
  dbus-user-session libpam-systemd pinentry-gnome3 tor parcimonie xloadimage scdaemon gnutls-bin
  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal libsasl2-modules-ldap libsasl2-modules-otp
  libsasl2-modules-sql pinentry-doc readline-doc
The following NEW packages will be installed:
  apt-transport-https ca-certificates dirmngr gnupg gnupg-agent gnupg-l10n gnupg2 libassuan0 libcurl3-gnutls
  libffi6 libgmp10 libgnutls30 libhogweed4 libidn2-0 libksba8 libldap-2.4-2 libldap-common libnettle6
  libnghttp2-14 libnpth0 libp11-kit0 libpsl5 libreadline7 librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
  libsqlite3-0 libssh2-1 libssl1.1 libtasn1-6 libunistring0 openssl pinentry-curses publicsuffix readline-common
  wget
0 upgraded, 37 newly installed, 0 to remove and 0 not upgraded.
Need to get 11.0 MB of archives.
After this operation, 28.4 MB of additional disk space will be used.
[...]

Add external repository.

$ echo "deb https://packages.prosody.im/debian stretch main" | sudo tee /etc/apt/sources.list.d/prosody.list
deb https://packages.prosody.im/debian stretch main

Import the key used to create the repository signature. More information about this step can be found in the blog post how to download in advance the public key used to sign repository signatures.

$ sudo apt-key --keyring /etc/apt/trusted.gpg.d/prosody.gpg \
               adv \
               --no-default-keyring \
               --keyserver keyserver.ubuntu.com \
               --recv $(wget --quiet \
                             --output-document - \
                             https://packages.prosody.im/debian/dists/stretch/Release.gpg | \
                        gpg --no-default-keyring --list-packets - | \
                        awk '/^:/ {print $NF}')
gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
Executing: /tmp/apt-key-gpghome.mYUsMrsrOD/gpg.1.sh --no-default-keyring --keyserver keyserver.ubuntu.com --recv 7393D7E674D9DBB5
gpg: key 7393D7E674D9DBB5: public key "Prosody IM Debian Packages <developers@prosody.im>" imported
gpg: Total number processed: 1
gpg:               imported: 1

Update package index.

$ sudo apt-get update
Hit:1 http://security.debian.org stretch/updates InRelease
Ign:2 http://deb.debian.org/debian stretch InRelease
Hit:3 http://deb.debian.org/debian stretch Release
Get:5 https://packages.prosody.im/debian stretch InRelease [5918 B]
Get:6 https://packages.prosody.im/debian stretch/main amd64 Packages [1554 B]
Fetched 7472 B in 0s (13.9 kB/s)   
Reading package lists... Done

Install Jabber/XMPP server.

$ sudo apt-get install prosody
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  libexpat1 lua-bitop lua-expat lua-filesystem lua-sec lua-socket lua5.1 ssl-cert
Suggested packages:
  lua-event lua-dbi-mysql lua-dbi-postgresql lua-dbi-sqlite3 lua-zlib openssl-blacklist
The following NEW packages will be installed:
  libexpat1 lua-bitop lua-expat lua-filesystem lua-sec lua-socket lua5.1 prosody ssl-cert
0 upgraded, 9 newly installed, 0 to remove and 10 not upgraded.
Need to get 570 kB of archives.
After this operation, 3374 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
[...]

Copy SSL certificates for the main domain example.org and the conference sub-domain to the /etc/prosody/certs directory.

$ sudo ls /etc/prosody/certs/
Makefile
certificate.pem
example.org.pem
example.org.key
conference.example.org.pem
conference.example.org.key
localhost.cnf
localhost.crt
localhost.key
openssl.cnf

Ensure that registration of new accounts via Jabber clients is disabled.

$ grep ^allow_registration /etc/prosody/prosody.cfg.lua
allow_registration = false

Ensure that authentication provider is set to hashed passwords stored using built-in storage.

$ grep ^authentication /etc/prosody/prosody.cfg.lua 
authentication = "internal_hashed"

Define yourself as an admin.

$ sudo sed -i -e "s/^admins = { }/admins = { \"milosz@example.org\" }/" /etc/prosody/prosody.cfg.lua

Define domain on which user accounts can be created.

$ sudo sed -i -e "s/^VirtualHost \"localhost\"/VirtualHost \"example.org\"/" /etc/prosody/prosody.cfg.lua

Enable multi-user conference component.

$ sudo sed -i -e "/VirtualHost \"example.org\"/a \ \ Component \"conference.example.org\" \"muc\"" /etc/prosody/prosody.cfg.lua

Restart Jabber service.

$ sudo prosodyctl restart

Verify that service is running.

$  sudo prosodyctl status
Prosody is running with PID 4644

Add an admin account.

$ sudo prosodyctl register milosz example.org
Enter new password:  *********
Retype new password: *********

Connect using your favourite Jabber/XMPP client.

How to free disk space from deleted but still referenced file


Very rarely, and often by mistake, you can end up with a deleted file that is still used by some process, which is likely still writing to it.

The running process is still holding a reference to the file, so the used disk space cannot be freed. You can kill this process right away or truncate the file to reclaim the disk space. It will not fix the source of the problem and it is not safe, but it can give you enough time to fix the issue properly.

I will describe this specific case on Debian Squeeze, but the whole process is still relevant as of today.

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 6.0.10 (squeeze)
Release:	6.0.10
Codename:	squeeze

Notice that glusterfs process is still writing to the deleted log file.

$ sudo lsof -p 7447
COMMAND    PID USER   FD   TYPE     DEVICE  SIZE/OFF   NODE NAME
glusterfs 7447 root  cwd    DIR        8,1      4096      2 /
glusterfs 7447 root  rtd    DIR        8,1      4096      2 /
glusterfs 7447 root  txt    REG        8,1     57792  16030 /usr/sbin/glusterfsd
glusterfs 7447 root  mem    REG        8,1     90504  15601 /lib/libgcc_s.so.1
glusterfs 7447 root  mem    REG        8,1     92640 127616 /usr/lib/glusterfs/3.2.7/xlator/debug/io-stats.so.0.0.0
glusterfs 7447 root  mem    REG        8,1    121472 127590 /usr/lib/glusterfs/3.2.7/xlator/performance/stat-prefetch.so.0.0.0
glusterfs 7447 root  mem    REG        8,1     86960 127586 /usr/lib/glusterfs/3.2.7/xlator/performance/quick-read.so.0.0.0
glusterfs 7447 root  mem    REG        8,1     71744 127588 /usr/lib/glusterfs/3.2.7/xlator/performance/io-cache.so.0.0.0
glusterfs 7447 root  mem    REG        8,1     45416 127577 /usr/lib/glusterfs/3.2.7/xlator/performance/read-ahead.so.0.0.0
glusterfs 7447 root  mem    REG        8,1     71752 127575 /usr/lib/glusterfs/3.2.7/xlator/performance/write-behind.so.0.0.0
glusterfs 7447 root  mem    REG        8,1    393760 127544 /usr/lib/glusterfs/3.2.7/xlator/cluster/afr.so.0.0.0
glusterfs 7447 root  mem    REG        8,1    234976 127571 /usr/lib/glusterfs/3.2.7/xlator/protocol/client.so.0.0.0
glusterfs 7447 root  mem    REG        8,1     80712  30248 /lib/libresolv-2.11.3.so
glusterfs 7447 root  mem    REG        8,1     22928  34763 /lib/libnss_dns-2.11.3.so
glusterfs 7447 root  mem    REG        8,1     51728  34761 /lib/libnss_files-2.11.3.so
glusterfs 7447 root  mem    REG        8,1     76936 127535 /usr/lib/glusterfs/3.2.7/rpc-transport/socket.so.0.0.0
glusterfs 7447 root  mem    REG        8,1    142168 127592 /usr/lib/glusterfs/3.2.7/xlator/mount/fuse.so.0.0.0
glusterfs 7447 root  mem    REG        8,1   1478056  34754 /lib/libc-2.11.3.so
glusterfs 7447 root  mem    REG        8,1    131261  34764 /lib/libpthread-2.11.3.so
glusterfs 7447 root  mem    REG        8,1     14696  34760 /lib/libdl-2.11.3.so
glusterfs 7447 root  mem    REG        8,1     91952  17982 /usr/lib/libgfxdr.so.0.0.0
glusterfs 7447 root  mem    REG        8,1     88584  17739 /usr/lib/libgfrpc.so.0.0.0
glusterfs 7447 root  mem    REG        8,1    400504  17697 /usr/lib/libglusterfs.so.0.0.0
glusterfs 7447 root  mem    REG        8,1    128744  34748 /lib/ld-2.11.3.so
glusterfs 7447 root    0u   CHR        1,3       0t0   2045 /dev/null
glusterfs 7447 root    1u   CHR        1,3       0t0   2045 /dev/null
glusterfs 7447 root    2u   CHR        1,3       0t0   2045 /dev/null
glusterfs 7447 root    3u  0000        0,9         0   2042 anon_inode
glusterfs 7447 root    4w   REG        8,1 993079299 127832 /var/log/glusterfs/mnt-backup.log.1.old (deleted)
glusterfs 7447 root    5u   CHR     10,229       0t0   7324 /dev/fuse
glusterfs 7447 root    6u  IPv4 1112791139       0t0    TCP gfs-a.local:1022->gfs-a.local:24007 (ESTABLISHED)
glusterfs 7447 root    7u  IPv4 1095043971       0t0    TCP gfs-a.local:1023->gfs-a.local:24009 (ESTABLISHED)
glusterfs 7447 root    8u   REG        8,1         0  16722 /tmp/tmpf6nhOjW (deleted)
glusterfs 7447 root    9u  IPv4 1235641780       0t0    TCP gfs-a.local:1020->gfs-b.local:24009 (ESTABLISHED)
glusterfs 7447 root   10r   CHR        1,9       0t0   2050 /dev/urandom

Filter the data to display only open files that have been unlinked by this specific process.

$ sudo lsof -a +L1  -p 7447
COMMAND    PID USER   FD   TYPE DEVICE  SIZE/OFF NLINK   NODE NAME
glusterfs 7447 root    4w   REG    8,1 993079299     0 127832 /var/log/glusterfs/mnt-backup.log.1.old (deleted)
glusterfs 7447 root    8u   REG    8,1         0     0  16722 /tmp/tmpf6nhOjW (deleted)

The log file cannot be truncated directly as it is no longer available in the filesystem, but you can access it by using the file descriptor opened by this particular process.

$ sudo ls -l /proc/7447/fd
total 0
lrwx------ 1 root root 64 Nov 21 11:02 0  -> /dev/null
lrwx------ 1 root root 64 Nov 21 11:02 1  -> /dev/null
lr-x------ 1 root root 64 Nov 21 11:02 10 -> /dev/urandom
lrwx------ 1 root root 64 Nov 21 11:02 2  -> /dev/null
lrwx------ 1 root root 64 Nov 21 11:02 3  -> anon_inode:[eventpoll]
l-wx------ 1 root root 64 Nov 21 11:02 4  -> /var/log/glusterfs/mnt-backup.log.1.old (deleted)
lrwx------ 1 root root 64 Nov 21 11:02 5  -> /dev/fuse
lrwx------ 1 root root 64 Nov 21 11:02 6  -> socket:[1112791139]
lrwx------ 1 root root 64 Nov 21 11:02 7  -> socket:[1095043971]
lrwx------ 1 root root 64 Nov 21 11:02 8  -> /tmp/tmpf6nhOjW (deleted)
lrwx------ 1 root root 64 Nov 21 11:02 9  -> socket:[1235641780]

Display file descriptors pointing to deleted files in a very simple way.

$ sudo find /proc/7447/fd -ilname "*(deleted)"
/proc/7447/fd/4
/proc/7447/fd/8

Display file descriptors pointing to deleted files in a more useful way.

$ sudo find /proc/7447/fd  -ilname "*(deleted)" -printf "%h/%f -> %l\n"
/proc/7447/fd/4 -> /var/log/glusterfs/mnt-backup.log.1.old (deleted)
/proc/7447/fd/8 -> /tmp/tmpf6nhOjW (deleted)

Display file descriptor pointing to the particular deleted file.

$ sudo find /proc/7447/fd  -lname "*mnt-backup.log.1.old (deleted)"
/proc/7447/fd/4

Truncate the file using this file descriptor.

$ sudo truncate  /proc/7447/fd/4 --size 0

Filter the data again to verify that the file was truncated.

$ sudo lsof -a +L1 -p 7447
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NLINK   NODE NAME
glusterfs 7447 root    4w   REG    8,1      729     0 127832 /var/log/glusterfs/mnt-backup.log.1.old (deleted)
glusterfs 7447 root    8u   REG    8,1        0     0  16722 /tmp/tmpf6nhOjW (deleted)

This frees the disk space and will give you some time.
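The steps above can be combined into a small script that truncates every deleted-but-open file of a process at once. A minimal, self-contained sketch; it creates and unlinks its own demo file instead of touching a real glusterfs process.

```shell
# Create a file, keep it open in a background process, then delete it --
# mimicking the situation above with a demo process instead of glusterfs.
tmpfile=$(mktemp)
head -c 1048576 /dev/zero > "$tmpfile"     # 1 MiB of data
sleep 30 < "$tmpfile" &                    # background process holds fd 0 open
pid=$!
rm "$tmpfile"                              # unlink it; the space is still held

# Truncate every file descriptor of the process that points to a deleted file.
for fd in $(find "/proc/$pid/fd" -lname '*(deleted)'); do
    truncate --size 0 "$fd"
done

size_after=$(stat -Lc %s "/proc/$pid/fd/0")   # size seen through the descriptor
echo "size after truncate: $size_after bytes"
kill "$pid"
```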

How to disable onboard WiFi and Bluetooth on Raspberry Pi 3

Disable the onboard WiFi and Bluetooth on a Raspberry Pi 3 device.

Disable onboard WiFi on boot.

$ echo "dtoverlay=pi3-disable-wifi" | sudo tee -a /boot/config.txt

Disable Bluetooth on boot.

$ echo "dtoverlay=pi3-disable-bt" | sudo tee -a /boot/config.txt

Disable the systemd service that initializes Bluetooth modems connected by UART.

$ sudo systemctl disable hciuart

Reboot the Raspberry Pi device.

$ sudo reboot

You can edit the config.txt file located on the boot partition directly before inserting the SD card into the Raspberry Pi 3 device, but remember to create an ssh file on the same partition. It will trigger the sshswitch service, which starts the OpenBSD Secure Shell server and then removes the created file, so you can connect and enable the ssh service persistently.
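With the SD card mounted on a workstation, the whole preparation can be scripted. The mount point below is a hypothetical stand-in created with mktemp; on a real card it would be the boot partition, e.g. /media/$USER/boot.

```shell
# Stand-in for the SD card's boot partition; replace with the real mount
# point (e.g. /media/$USER/boot -- a hypothetical path) on an actual card.
BOOT=$(mktemp -d)

# Append both overlay lines to config.txt and create the empty "ssh"
# marker file that triggers the sshswitch service on first boot.
printf '%s\n' "dtoverlay=pi3-disable-wifi" "dtoverlay=pi3-disable-bt" >> "$BOOT/config.txt"
touch "$BOOT/ssh"

cat "$BOOT/config.txt"
```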

Device Tree overlays are extensively described in the Raspberry Pi GitHub repository.
