Showing posts with label Linux. Show all posts

Monday, April 21, 2025

Monitoring transient network traffic sessions

Sometimes there is a need to investigate network traffic that is transient. To make the problem clearer, let's examine an example. The firewall indicates that some network traffic was blocked:


Block IPv4 link-local (1000000102) 192.168.99.99:35018 169.254.169.254:80 TCP:S 

We want to figure out which process sent out the packets. So, we would do something like


sudo netstat -anp | grep 35018

Unfortunately, this yields nothing because, by the time we issue the netstat command, port 35018 is no longer open. It turns out the network traffic is short-lived. How do we figure out which process sends out the packets? Of course, we could try to capture the packets:


sudo tcpdump -XX -i any host 169.254.169.254 and port 80

which indeed captures the packets, and also shows the header and content of the packets captured. Sometimes, the packet header and content are sufficient for us to figure out which process sent out the packets. However, what if they do not offer a clue?

It turns out we can use sysdig, for instance, in this way:


sysdig -p '*%evt.num  %evt.time   %evt.cpu   %proc.name   (%thread.tid %proc.ppid)   %evt.dir %evt.type %evt.info' fd.rip=169.254.169.254 and fd.rport=80

which tells us the process that sent out the packets as well as its parent process PID. The sending process may be gone by the time we look, but often the parent process is still around. This solves the problem because it gives us a lead to investigate further.
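
The parent PID gives us a concrete lead for that follow-up. As a small sketch, we can inspect the surviving parent with ps; here $$ (the current shell's PID) merely stands in for the PPID reported by sysdig:

```shell
# inspect the parent process of the short-lived sender;
# substitute the PPID reported by sysdig for $$ in practice
ppid=$$
ps -o pid,ppid,user,args -p "$ppid"
```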

Friday, February 21, 2025

Enabling NAT and IP Masquerading on Rocky Linux 9

This is a note about enabling NAT (SNAT, more precisely) and IP masquerading on a Linux host running Rocky Linux 9. The host has two network interfaces: eth0 and wg0. Interface eth0 connects to the outside network and is assigned a public IP address, while interface wg0 is on a private network. The objective is to make the Linux host act as a router for the private network so that traffic originating from the private network can reach the outside network. The steps to achieve this objective using firewalld are as follows:

  1. Enable IPv4 forwarding
          echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
          sudo sysctl -p
        
  2. Assign interface eth0 to the external zone
         firewall-cmd --permanent --zone=external --change-interface=eth0
        
  3. Assign interface wg0 to the internal zone
         firewall-cmd --permanent --zone=internal --change-interface=wg0
        
  4. Set the zone target of the internal zone to ACCEPT
         firewall-cmd --permanent --zone=internal --set-target=ACCEPT
        
  5. Finally, reload firewalld's configuration.
         firewall-cmd --reload
        
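
The steps above can be collected into one small script. This is only a sketch: the interface names eth0 (external) and wg0 (internal) are this note's assumptions, and it must be run as root.

```shell
#!/bin/bash
# Sketch: apply the NAT/masquerading steps from this note in one go.
# Assumes eth0 faces the outside network and wg0 the private network.
setup_nat() {
    set -e
    echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    sysctl -p
    firewall-cmd --permanent --zone=external --change-interface=eth0
    firewall-cmd --permanent --zone=internal --change-interface=wg0
    firewall-cmd --permanent --zone=internal --set-target=ACCEPT
    firewall-cmd --reload
}
# call as root: setup_nat
```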

There is no need to meddle with anything else, such as adding nftables rules or setting masquerading on the outward-facing network interface. This is because the external zone has masquerading enabled by default. This can be verified by

firewall-cmd --zone=external --query-masquerade
    

or by looking at the zone definition file at /usr/lib/firewalld/zones/external.xml.

In addition, the external zone is also enabled to forward packets. We can verify this by looking at the zone definition file at /usr/lib/firewalld/zones/external.xml or by

firewall-cmd --zone=external --query-forward
    

The issue seems to lie in the zones' targets. First, let's view the zones' configuration:

firewall-cmd --zone=external --list-all
    

Of course, we can also just check the external zone's target:

firewall-cmd --permanent --zone=external --get-target
    

Similarly, for the internal zone:

firewall-cmd --zone=internal --list-all
    
firewall-cmd --permanent --zone=internal --get-target
    
    

The targets of both the external and internal zones are originally "default". The internal zone's default target is in fact interpreted as reject, thus preventing packets from being forwarded to the outside network. This is explained as follows:

For a forwarded packet that ingresses zoneA and egresses zoneB:
  • if zoneA's target is ACCEPT, DROP, or REJECT then the packet is accepted, dropped, or rejected respectively.
  • if zoneA's target is default, then the packet is accepted, dropped, or rejected based on zoneB's target. If zoneB's target is also default, then the packet will be rejected by firewalld's catchall reject.

Since the targets of both the ingress (internal) and egress (external) zones are "default", the result is that the internal zone's target effectively becomes REJECT.

One question I have in mind is: why not assign the internal-facing interface to the trusted zone? That might be a topic for another day.

Reference

This note benefited tremendously from the following resources:

  1. https://askubuntu.com/questions/1463093/what-is-target-default-of-a-zones-configuration-in-firewalld
  2. https://github.com/firewalld/firewalld/issues/590#issuecomment-605200548
  3. man firewall-cmd
  4. man firewalld.zone
  5. man firewalld


Wednesday, February 19, 2025

Running the dnf package manager on Linux hosts with small memory

Running the dnf package manager can sometimes be difficult on Linux hosts with small memory. I observed this on a Rocky Linux 9 host with 1 GB of RAM: after enabling EPEL, dnf install would sometimes be killed due to OOM.

To address this issue, we can create and enable a swap space:

$ sudo dd if=/dev/zero of=/swapfile count=1024 bs=1MiB
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ sudo dnf update

Once done, we then turn off the swap space:

$ sudo swapoff /swapfile
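
The whole routine can be wrapped into a small helper. This is a sketch only: it assumes /swapfile does not already exist and must be run as root.

```shell
#!/bin/bash
# Sketch: run a memory-hungry command with a temporary 1 GB swap file.
# Assumes /swapfile does not already exist; run as root.
with_temp_swap() {
    dd if=/dev/zero of=/swapfile count=1024 bs=1MiB &&
    chmod 600 /swapfile &&
    mkswap /swapfile &&
    swapon /swapfile || return 1
    "$@"                          # e.g. dnf update
    swapoff /swapfile
    rm -f /swapfile
}
# call as root: with_temp_swap dnf update
```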

Reference

This idea comes from a Stack Overflow post.

Thursday, December 12, 2024

Solution for problem: rootless Docker container cannot ping outside networks

I am running a rootless Docker container on an Ubuntu host (24.04 LTS). However, from inside the container I cannot ping the host where the container is running, nor the outside network. The workaround I came up with consists of two steps:

  1. Run the container with the --privileged option, as in
    docker container run --privileged 
  2. On the host where the container is running, set the Linux kernel parameter `net.ipv4.ping_group_range` to include the group ID of the user that runs the container. For instance, if that group ID is 3000, we can set the parameter as follows:
    echo "3000 3000" > /proc/sys/net/ipv4/ping_group_range

If tests indicate that pings are successful in the container, we can set the kernel parameter through a configuration file so that the setting survives a reboot, e.g.,

  • On the host that the container is running, create a file, e.g., /etc/sysctl.d/99-ping-group-range.conf as in:
    echo "net.ipv4.ping_group_range=3000 3000" \
           > /etc/sysctl.d/99-ping-group-range.conf
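
After creating the file, the setting can also be applied immediately, without a reboot, by reloading all sysctl configuration files (run as root):

```shell
# reload /etc/sysctl.conf and all files under /etc/sysctl.d
sudo sysctl --system
```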

The idea of these steps comes from

  1. https://github.com/containers/podman/issues/2488
  2. https://opennms.discourse.group/t/how-to-allow-unprivileged-users-to-use-icmp-ping/1573

Wednesday, October 2, 2024

SSH Public Key Authentication Fails When Home Is on NFS

As the title states, no matter what I tried, I couldn't get SSH public key authentication to work for a Linux host. It turns out that the host running the SSH server has SELinux enabled, and the users' home directories are on NFS. To make public key authentication work for SSH, we simply need to configure SELinux, i.e.,


sudo setsebool -P use_nfs_home_dirs 1

Wednesday, February 21, 2024

Installing Git and Other Tools on Linux Systems without Administrative Privilege

Sometimes I want to install software tools, such as Git, Screen, and others, on a Linux system but find myself without administrative privilege. The first method that comes to mind is to download the source code, compile it, and set it up. This can be challenging because numerous dependencies may also be missing on the system.

Recently it occurred to me that we can do this via conda. For instance, the following steps let me install both Git and Screen on a Linux system without administrative privilege.

  1. Download miniconda.
    
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
        
  2. Set up miniconda
    
        bash Miniconda3-latest-Linux-x86_64.sh
        
  3. Initialize conda. Exit shell and get back in, and then
    
        conda init
        
  4. Install Git via conda
    
        conda install anaconda::git
        
  5. Install Screen via conda
    
        conda install conda-forge::screen
        
  6. Find and install others ...

Some may think this method is overkill. However, it saved me tons of time that would otherwise be spent downloading and compiling tons of dependencies. Isn't our own time more valuable?

Wednesday, September 20, 2023

Setting up Conda Virtual Environment for Tensorflow

These steps are for creating a Python virtual environment for running Tensorflow on a GPU. The steps work on Fedora Linux 38 and Ubuntu 22.04 LTS.

To install miniconda, we can run the following as a regular user:


curl -s "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" | bash

Following that, we create a conda virtual environment for Python.


# create conda virtual environment
conda create -n tf213 python=3.11 pip

# activate the environment in order to install packages and libraries
conda activate tf213

#
# the following are from Tensorflow pip installation guide
#
# install CUDA Toolkit 
conda install -c conda-forge cudatoolkit=11.8.0

# install python packages
pip install nvidia-cudnn-cu11==8.6.0.163 tensorflow==2.13.*

#
# setting up library and tool search paths
# scripts in activate.d shall be run when the environment
# is being activated
#
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
# get CUDNN_PATH
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
# set LD_LIBRARY_PATH
echo 'export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$CONDA_PREFIX/lib/:$LD_LIBRARY_PATH' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
# set XLA_FLAGS (on some systems, without this, we get a 'libdevice not found at ./libdevice.10.bc' error)
echo 'export XLA_FLAGS=--xla_gpu_cuda_data_dir=$CONDA_PREFIX' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

To test it, we can run


source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

Enjoy!

Monday, September 18, 2023

Mounting File Systems in a Disk Image on Linux

On Linux systems, we can create a disk image using the dd command. This post lists the steps to mount file systems, in particular LVM volumes, in an image of a whole disk, which is often created as follows,


dd if=/dev/sdb of=/mnt/disk1/sdb.img bs=1M status=progress

Assuming the disk has multiple partitions, how do we mount the file systems on these partitions? The following are the steps,


# 1. mount the disk where the disk image is
#    we assume the disk is /dev/sdb1, and we mount
#    it on directory win
sudo mount /dev/sdb1 win

# 2. map the partitions to loopback devices
#    here we assume the disk image is win/disks/disk1.img
sudo losetup -f -P win/disks/disk1.img

# 3. list the LVM volumes
sudo lvdisplay

# 4. suppose from the output of the above command, 
#    the volume is shown as /dev/mylvm/lvol0,
#    and we want it mounted on directory lvol0
sudo mount /dev/mylvm/lvol0 lvol0

# 5. do something we want ...


# 6. unmount the volume
sudo umount lvol0

# 7. deactivate LVM volume
#    we can query, confirm the volume group by
#    vgdisplay
sudo vgchange -a n mylvm

# 8. detach the loopback device
#    assuming the device is /dev/loop0
sudo losetup -d /dev/loop0

# 9. umount the disk
sudo umount win

Wednesday, August 16, 2023

Bus Error (Core Dumped)!

I was training a machine learning model written in PyTorch on a Linux system. During training, I encountered "Bus error (core dumped)." This error produces no stack trace. Eventually, I figured out that it resulted from the exhaustion of shared memory, the symptom being that "/dev/shm" was full.

To resolve this issue, I simply doubled the size of "/dev/shm", following the instructions given in this Stack Overflow post,

How to resize /dev/shm?

Basically, it is to edit the /etc/fstab file. If the file already has an entry for /dev/shm, we simply increase its size. If not, we add a line to the file, such as

none /dev/shm tmpfs defaults,size=32G 0 0

To bring it into effect, we remount the file system, as in,

sudo mount -o remount /dev/shm
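
Afterwards, we can verify the new size:

```shell
# show the size and current usage of the shared-memory file system
df -h /dev/shm
```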


Tuesday, February 7, 2023

Tensorflow Complains "successful NUMA node read from SysFS had negative value (-1)"

To test GPU support for Tensorflow, we run the following, according to Tensorflow's documentation:


python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

However, in my case, I saw an annoying message:


2023-02-07 14:40:01.345350: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero

A Stack Overflow discussion has an excellent explanation of this. I have a single CPU and a single GPU installed on the system, which runs Ubuntu 20.04 LTS. Following the advice given there, the following command gets rid of the message,


su -c "echo 0 | tee /sys/module/nvidia/drivers/pci:nvidia/*/numa_node"

That is sweet!

Reference

  1. https://www.tensorflow.org/install/pip#linux_setup
  2. https://stackoverflow.com/questions/44232898/memoryerror-in-tensorflow-and-successful-numa-node-read-from-sysfs-had-negativ

Saturday, February 4, 2023

Checking RAM Type on Linux

We can use the following command to check the RAM type and the memory slots:


sudo dmidecode --type 17

Reloading WireGuard Configuration File without Completely Restarting WireGuard Session

On Linux systems, under bash, we can run the following command to reload and apply a revised WireGuard configuration file without restarting the interface and disrupting the clients:


wg syncconf wg0 <(wg-quick strip wg0)

Note that this command may not work in shells other than bash, because it uses process substitution. However, we can always accomplish the same in a three-step fashion:


wg-quick strip wg0 > temp_wg0.conf
wg syncconf wg0 temp_wg0.conf
rm temp_wg0.conf

Determining File System of Current Directory on Linux

On Linux, a simple command can reveal the file system on which the current directory is actually located. The command is


df -hT .

Friday, January 27, 2023

Mysterious bash while read var behavior understood!

This is a note about a mysterious behavior of while read var in the Bash shell. To understand the problem, let's consider the following:

Given a text file called example.txt as follows, write a Bash shell script called join_lines.sh to join the lines


BEGIN Line 1 Line 1
Line 1 Line 1
BEGIN Line 2 Line 2
Line 2 Line 2
Line 2 Line 2
Line 2
BEGIN Line 3 Line 3 Line 3
Line 3
Line 3

The output should be 3 lines, as illustrated in the example below:


$ ./join_lines.sh
Joined Line: BEGIN Line 1 Line 1 Line 1 Line 1
Joined Line: BEGIN Line 2 Line 2 Line 2 Line 2 Line 2 Line 2 Line 2
Joined Line: BEGIN Line 3 Line 3 Line 3 Line 3 Line 3

Our first implementation of join_lines.sh is as follows:


#!/bin/bash

joined=""
cat example.txt | \
    while read line; do
        echo ${line} | grep -E -q "^BEGIN"
        if [ $? -eq 0 ]; then
            if [ "${joined}" != "" ]; then
                echo "Joind Line: ${joined}"
                joined=""
            fi
        fi
        joined="${joined} ${line}"
    done
echo "Joind Line: ${joined}"

Unfortunately, the output is actually the following:


$ ./join_lines.sh
Joind Line:  BEGIN Line 1 Line 1 Line 1 Line 1
Joind Line:  BEGIN Line 2 Line 2 Line 2 Line 2 Line 2 Line 2 Line 2
Joind Line:
$

Why does the variable joined lose its value? That is a mystery, isn't it? To understand this, let's revise the script to print out the process IDs of the shell. The revised version is as follows:


#!/bin/bash

joined=""
cat example.txt | \
    while read line; do
        echo ${line} | grep -E -q "^BEGIN"
        if [ $? -eq 0 ]; then
            if [ "${joined}" != "" ]; then
                echo "In $$ $BASHPID: Joind Line: ${joined}"
                joined=""
            fi
        fi
        joined="${joined} ${line}"
    done
echo "In $$ $BASHPID: Joind Line: ${joined}"

If we run this revised script, we shall get something like the following:


$ ./join_lines.sh
In 7065 7067: Joind Line:  BEGIN Line 1 Line 1 Line 1 Line 1
In 7065 7067: Joind Line:  BEGIN Line 2 Line 2 Line 2 Line 2 Line 2 Line 2 Line 2
In 7065 7065: Joind Line:
$

By carefully examining the output, we can see that $$ and $BASHPID have different values in the first two lines. So, what is the difference between $$ and $BASHPID, and why are they different?

The Bash manual page states this:


$ man bash
...
 BASHPID
              Expands  to  the  process  ID of the current bash process.  This
              differs from $$ under certain circumstances, such  as  subshells
              that  do  not require bash to be re-initialized.  Assignments to
              BASHPID have no effect.  If BASHPID is unset, it loses its  spe‐
              cial properties, even if it is subsequently reset.
 ...
$

The above experiment reveals that the while read-loop actually runs in a subshell, because it is part of a pipeline. In fact, there are two variables, both called joined: one lives in the parent bash process and the other in the child. A simple fix to the script is to put both the while read-loop and the last echo command in a subshell, e.g., as follows:


#!/bin/bash

joined=""
cat example.txt | \
	( \
    while read line; do
        echo ${line} | grep -E -q "^BEGIN"
        if [ $? -eq 0 ]; then
            if [ "${joined}" != "" ]; then
                echo "In $$ $BASHPID: Joind Line: ${joined}"
                joined=""
            fi
        fi
        joined="${joined} ${line}"
    done
echo "In $$ $BASHPID: Joind Line: ${joined}" \
	)

Let's run this revised script. We shall get:


$ ./join_lines.sh
In 7119 7121: Joind Line:  BEGIN Line 1 Line 1 Line 1 Line 1
In 7119 7121: Joind Line:  BEGIN Line 2 Line 2 Line 2 Line 2 Line 2 Line 2 Line 2
In 7119 7121: Joind Line:  BEGIN Line 3 Line 3 Line 3 Line 3 Line 3

The mystery is solved!
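
An alternative fix avoids the pipeline, and hence the subshell, altogether: redirect the file into the loop so that the while read-loop runs in the current shell. A minimal sketch (recreating the sample input so the script is self-contained):

```shell
#!/bin/bash
# Recreate the sample input, then join continuation lines without a
# pipeline: the while read-loop reads from a redirection, so it runs
# in the current shell and 'joined' keeps its value after the loop.
cat > example.txt <<'EOF'
BEGIN Line 1 Line 1
Line 1 Line 1
BEGIN Line 2 Line 2
Line 2 Line 2
Line 2 Line 2
Line 2
BEGIN Line 3 Line 3 Line 3
Line 3
Line 3
EOF

joined=""
while read -r line; do
    if echo "${line}" | grep -E -q "^BEGIN"; then
        if [ "${joined}" != "" ]; then
            echo "Joined Line:${joined}"
            joined=""
        fi
    fi
    joined="${joined} ${line}"
done < example.txt
echo "Joined Line:${joined}"
```

Bash also offers shopt -s lastpipe, which runs the last command of a pipeline in the current shell; note that it only takes effect when job control is off, as in non-interactive scripts.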

Wednesday, January 25, 2023

Disabling Linux Boot Splash Window

Most Linux systems use plymouthd to display the splash screen during boot. If you run the computer as a server and do not log in from the console, plymouthd can sometimes bring more trouble than it is worth. For one, to display the splash screen, plymouthd needs to interact with the driver of the graphics adapter in the system; if there is an issue there, the system will not boot successfully. Since the server's console may not be conveniently accessible, this can be a real inconvenience.

To remove it on Linux systems like Fedora and Redhat, we can do the following,


sudo grubby --update-kernel=ALL --remove-args="quiet"
sudo grubby --update-kernel=ALL --remove-args="rhgb"
# directly edit /etc/default/grub and add "rd.plymouth=0 plymouth.enable=0" to GRUB_CMDLINE_LINUX
sudo vi /etc/default/grub
sudo grub2-mkconfig -o /etc/grub2.cfg
sudo dnf remove plymouth

Wednesday, January 18, 2023

More Space Needed on the Root File System When Installing the CUDA Toolkit

Following the instructions on Nvidia's site, I was setting up the CUDA Toolkit on a Fedora Linux host and ran into a problem: the installation failed due to not enough free space on the root file system, as indicated by the error message below


$ sudo dnf -y install cuda
...
Running transaction check
Transaction check succeeded.
Running transaction test
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction test error:
  installing package cuda-nvcc-12-0-12.0.76-1.x86_64 needs 67MB more space on the / filesystem
  installing package cuda-gdb-12-0-12.0.90-1.x86_64 needs 84MB more space on the / filesystem
  installing package cuda-driver-devel-12-0-12.0.107-1.x86_64 needs 85MB more space on the / filesystem
  installing package cuda-libraries-devel-12-0-12.0.0-1.x86_64 needs 85MB more space on the / filesystem
  installing package cuda-visual-tools-12-0-12.0.0-1.x86_64 needs 85MB more space on the / filesystem
  installing package cuda-documentation-12-0-12.0.76-1.x86_64 needs 85MB more space on the / filesystem
  installing package cuda-demo-suite-12-0-12.0.76-1.x86_64 needs 98MB more space on the / filesystem
  installing package cuda-cuxxfilt-12-0-12.0.76-1.x86_64 needs 99MB more space on the / filesystem
  installing package cuda-cupti-12-0-12.0.90-1.x86_64 needs 210MB more space on the / filesystem
  installing package cuda-cuobjdump-12-0-12.0.76-1.x86_64 needs 210MB more space on the / filesystem
  installing package cuda-compiler-12-0-12.0.0-1.x86_64 needs 210MB more space on the / filesystem
  installing package cuda-sanitizer-12-0-12.0.90-1.x86_64 needs 248MB more space on the / filesystem
  installing package cuda-command-line-tools-12-0-12.0.0-1.x86_64 needs 248MB more space on the / filesystem
  installing package cuda-tools-12-0-12.0.0-1.x86_64 needs 248MB more space on the / filesystem
  installing package cuda-toolkit-12-0-12.0.0-1.x86_64 needs 248MB more space on the / filesystem
  installing package cuda-12-0-12.0.0-1.x86_64 needs 248MB more space on the / filesystem
  installing package cuda-12.0.0-1.x86_64 needs 248MB more space on the / filesystem

Error Summary
-------------
Disk Requirements:
   At least 248MB more space needed on the / filesystem.
...
$

It turns out that CUDA is installed under the /usr/local directory, and indeed, the free space on / is low. The solution to this problem is to bind-mount the /usr/local directory onto a file system that has sufficient disk space. The following steps illustrate this solution, provided that the file system mounted at /disks/disk1 has sufficient space


sudo mkdir /disks/disk1/local
sudo rsync -azv /usr/local/ /disks/disk1/local/
sudo rm -r /usr/local
sudo mkdir /usr/local
sudo mount --bind /disks/disk1/local /usr/local
sudo cp /etc/fstab /etc/fstab.bu
su -c "echo \
  '/disks/disk1/local /usr/local none defaults,bind,nofail,x-systemd.device-timeout=2 0 0' \
  >> /etc/fstab"
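
Once the bind mount is in place, we can confirm that /usr/local now resolves to the larger file system:

```shell
# show the mount that /usr/local resolves to, and its free space
findmnt --target /usr/local
df -h /usr/local
```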

Tuesday, January 17, 2023

Installing Missing LaTeX Packages?

I recently discovered that I can easily install missing LaTeX packages on Fedora Linux via


sudo dnf install 'tex(beamer.cls)' 
sudo dnf install 'tex(hyperref.sty)' 

Can we do something similar on Debian/Ubuntu distributions?

Reference

  1. https://docs.fedoraproject.org/en-US/neurofedora/latex/

Monday, January 16, 2023

Creating and Starting KVM Virtual Machine: Basic Steps

This is just a note documenting the basic steps to create and start KVM virtual machines on Linux systems.

  1. Make a plan for virtual machine resources. For this, we should query host resources.
    
        # show available disk spaces
        df -h
        # show available memory
        free -m
        # CPUs
        lscpu
        
  2. Assume we are installing an Ubuntu server system. We shall download the ISO image for the system, e.g.,
    
        wget \
          https://releases.ubuntu.com/22.04.1/ubuntu-22.04.1-live-server-amd64.iso \
          -O /var/lib/libvirt/images/ubuntu-22.04.1-live-server-amd64.iso
        
  3. Create a virtual disk for the virtual machine, e.g.,
    
        sudo truncate --size=10240M /var/lib/libvirt/images/officeservice.img
        
  4. Decide how we should configure the virtual machine network. First, we query existing ones:
    
        virsh --connect qemu:///system  net-list --all
        
  5. Now create a virtual machine and set up Ubuntu Linux on it, e.g.,
    
        sudo virt-install --name ubuntu \
        --description 'Ubuntu Server LTS' \
        --ram 4096 \
        --vcpus 2 \
        --disk path=/var/lib/libvirt/images/officeservice.img,size=10 \
        --osinfo detect=on,name=ubuntu-lts-latest \
        --network network=default \
        --graphics vnc,listen=127.0.0.1,port=5901 \
        --cdrom /var/lib/libvirt/images/ubuntu-22.04.1-live-server-amd64.iso  \
        --noautoconsole \
        --connect qemu:///system
        
  6. Suppose that we connect to the Linux host via SSH from a Windows host. We cannot directly access the console of the virtual machine (which is at 127.0.0.1:5901 via VNC). In this case, we tunnel to the Linux host (assume its host name is LinuxHost) from the Windows host:
    
        ssh -L 15901:localhost:5901 LinuxHost
        
  7. We can now access the console via a VNC viewer on the Windows host at localhost:15901.
  8. Once the Ubuntu installation is over, we lose the VNC connectivity. But we can list the virtual machines created:
    
        sudo virsh --connect qemu:///system list --all
        
  9. To start the virtual machine, we run
    
        sudo virsh --connect qemu:///system  start ubuntu
        
  10. To make the virtual machine start when the host boots, set it to autostart, e.g.,
    
    	virsh --connect qemu:///system autostart ubuntu
    	

References

  1. https://docs.fedoraproject.org/en-US/quick-docs/getting-started-with-virtualization/
  2. https://ubuntu.com/blog/kvm-hyphervisor
  3. https://askubuntu.com/questions/160152/virt-install-says-name-is-in-use-but-virsh-list-all-is-empty-where-is-virt-i
  4. https://www.cyberciti.biz/faq/rhel-centos-linux-kvm-virtualization-start-virtual-machine-guest/
  5. https://www.cyberciti.biz/faq/howto-linux-delete-a-running-vm-guest-on-kvm/