Showing posts with label System Administration.

Monday, April 21, 2025

Monitoring Transient Network Traffic Sessions

Sometimes there is a need to investigate network traffic that is transient. To make the problem clearer, let's examine an example. The firewall indicates that some network traffic was blocked:


Block IPv4 link-local (1000000102) 192.168.99.99:35018 169.254.169.254:80 TCP:S 

We want to figure out which process sent out the packets. So, we would do something like


sudo netstat -anp | grep 35018

Unfortunately, this yields nothing because, by the time we issue the netstat command, port 35018 is no longer open. It turns out the network traffic is short-lived. How do we figure out which process sends out the packets? Of course, we could try to capture the packets:


sudo tcpdump -XX -i any host 169.254.169.254 and port 80

which indeed captures the packets and shows the headers and contents of the captured packets. Sometimes the packet header and content are sufficient for us to figure out which process sent out the packets. However, what if the packet header and content do not offer a clue?

It turns out we can use sysdig. For instance, we can use it in this way:


sudo sysdig -p '*%evt.num  %evt.time   %evt.cpu   %proc.name   (%thread.tid %proc.ppid)   %evt.dir %evt.type %evt.info' fd.rip=169.254.169.254 and fd.rport=80

which tells us the process that sent out the packets and the parent process PID. The process that sent out the packets may be gone, but often the parent process is still around. This solves the problem because it gives us a way to investigate further.
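
Once sysdig reports the parent process PID, we can inspect the parent directly. The following is only a minimal follow-up sketch, where 1234 stands in for the PPID printed by sysdig:

# show the parent process, its owner, start time, and full command line
ps -o pid,ppid,user,lstart,cmd -p 1234
# its current working directory can also be telling
sudo ls -l /proc/1234/cwd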

Friday, February 21, 2025

Enabling NAT and IP Masquerading on Rocky Linux 9

This is a note about enabling NAT (SNAT, more precisely) and IP masquerading on a Linux host that runs Rocky Linux 9. The host has two network interfaces: eth0 and wg0. Interface eth0 connects to the outside network and is assigned a public IP address, while interface wg0 is on a private network. The objective is to make the Linux host a router for the private network so that traffic originating from the private network can reach the outside network. The steps to achieve this objective using firewalld are as follows:

  1. Enable IPv4 forwarding
          echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
          sudo sysctl -p
        
  2. Assign interface eth0 to the external zone
         firewall-cmd --permanent --zone=external --change-interface=eth0
        
  3. Assign interface wg0 to the internal zone
         firewall-cmd --permanent --zone=internal --change-interface=wg0
        
  4. Set the zone target of the internal zone to ACCEPT
         firewall-cmd --permanent --zone=internal --set-target=ACCEPT
        
  5. Finally, reload firewalld's configuration.
         firewall-cmd --reload
        

There is no need to meddle with anything else, such as adding nftables rules or setting up masquerading for the outward-facing network interface, because the external zone has masquerading enabled by default. This can be verified by

firewall-cmd --zone=external --query-masquerade
    

or by looking at the zone definition file at /usr/lib/firewalld/zones/external.xml.

In addition, the external zone is also enabled to forward packets. We can examine this by looking at the zone definition file at /usr/lib/firewalld/zones/external.xml or by

firewall-cmd --zone=external --query-forward
    

The issue lies in the zones' targets. First, let's view the two zones' configurations:

firewall-cmd --zone=external --list-all
firewall-cmd --zone=internal --list-all

Of course, we can also just check the targets:

firewall-cmd --permanent --zone=external --get-target
firewall-cmd --permanent --zone=internal --get-target
    

The targets of both the external and internal zones are originally default. The internal zone's default target is in fact interpreted as reject, thus preventing packets from being forwarded to the outside network. This is explained as follows:

For a forwarded packet that ingresses zoneA and egresses zoneB:
  • if zoneA's target is ACCEPT, DROP, or REJECT then the packet is accepted, dropped, or rejected respectively.
  • if zoneA's target is default, then the packet is accepted, dropped, or rejected based on zoneB's target. If zoneB's target is also default, then the packet will be rejected by firewalld's catchall reject.

Since the targets of both the ingress (internal) and egress (external) zones are "default", forwarded packets end up hitting firewalld's catch-all reject; setting the internal zone's target to ACCEPT (step 4) is what allows the forwarding to work.
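
With the steps applied and firewalld reloaded, a quick end-to-end check can be done from a host on the private (wg0) network. The commands below are only a sketch: the destination address is an example, and the client is assumed to route its Internet traffic through the WireGuard tunnel:

ping -c 3 1.1.1.1
curl -s https://ifconfig.me

If SNAT works, the address reported by the second command should be the router's public (eth0) address.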

One question I have in mind is: why not assign the internal-facing interface to the trusted zone? That might be a topic for another day.

Reference

This note benefited tremendously from the following resources:

  1. https://askubuntu.com/questions/1463093/what-is-target-default-of-a-zones-configuration-in-firewalld
  2. https://github.com/firewalld/firewalld/issues/590#issuecomment-605200548
  3. man firewall-cmd
  4. man firewalld.zone
  5. man firewalld

 

Wednesday, February 19, 2025

Running the dnf Package Manager on Linux Hosts with Small Memory

Running the dnf package manager can sometimes be difficult on Linux hosts with little memory. I observed this on a Rocky Linux 9 host with 1 GB of RAM: after enabling EPEL, dnf install would sometimes be killed due to OOM.

To address this issue, we can create and enable a swap space:

$ sudo dd if=/dev/zero of=/swapfile count=1024 bs=1MiB
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ sudo dnf update

Once done, we then turn off the swap space:

$ sudo swapoff /swapfile
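
If the swap file is no longer needed, we can also delete it to reclaim the disk space (assuming it was created at /swapfile as above):

$ sudo rm /swapfile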

Reference

This idea came from this Stack Overflow post.


Thursday, December 12, 2024

Solution for problem: rootless Docker container cannot ping outside networks

I am running a rootless Docker container on an Ubuntu host (24.04 LTS). However, from inside the container I cannot ping the host where the container is running or the outside network. The workaround I came up with has two steps:

  1. Run the container with the --privileged option, as in
    docker container run --privileged 
  2. On the host where the container is running, set the Linux kernel parameter `net.ipv4.ping_group_range` to include the group ID of the user that runs the container. For instance, if the group ID of that user is 3000, we can set the parameter as follows:
    echo "3000 3000" | sudo tee /proc/sys/net/ipv4/ping_group_range

If tests indicate that pings are successful in the container, we can set the kernel parameter through a configuration file so that the setting survives a reboot, e.g.,

  • On the host where the container is running, create a file, e.g., /etc/sysctl.d/99-ping-group-range.conf, as in:
    echo "net.ipv4.ping_group_range=3000 3000" | \
           sudo tee /etc/sysctl.d/99-ping-group-range.conf
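
The new file can then be loaded without a reboot by reloading all sysctl configuration files:

sudo sysctl --system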

The ideas for these steps came from:

  1. https://github.com/containers/podman/issues/2488
  2. https://opennms.discourse.group/t/how-to-allow-unprivileged-users-to-use-icmp-ping/1573

Wednesday, October 2, 2024

SSH Public Key Authentication Fails When Home Is on NFS

As the title states, regardless of how I tried, I couldn't get SSH public key authentication to work for a Linux host whose home directories are on NFS. It turns out that the Linux host running the SSH server has SELinux enabled. To make public key authentication work for SSH, we simply need to configure SELinux, i.e.,


sudo setsebool -P use_nfs_home_dirs 1
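
To double-check that the boolean is now enabled, we can query it:

getsebool use_nfs_home_dirs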

Friday, May 3, 2024

Cannot start cmd.exe on Windows 10

Somehow I encountered a problem where I could not start the Windows Command Prompt (cmd.exe). The solution turns out to be removing a key from the registry. A number of posts point to the removal of HKCU\Software\Microsoft\Command Processor\AutoRun. A complication comes from the fact that the user account is a standard user account; however, regedit needs to run as an administrator, which means HKCU refers to the administrator's hive, not the standard user's.

To address this issue, we can perform the following steps

  1. Figure out the user's SID:
    
        whoami /user
        
    The SID begins with S-, which we can easily recognize in the output.
  2. Open regedit, browse to HKEY_USERS, then to the key matching the user's SID, then to Software\Microsoft\Command Processor, locate AutoRun, and remove it.

A Stack Overflow post indicates several more keys to remove. They were not necessary in my case, but it is good to document them, just in case they are needed in the future:


reg delete "HKCU\Console" /f
reg delete "HKCU\Software\Microsoft\Command Processor" /v "AutoRun" /f
reg delete "HKLM\Software\Microsoft\Command Processor" /v "AutoRun" /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File 
Execution Options\cmd.exe" /f

Wednesday, February 21, 2024

Installing Git and Other Tools on Linux Systems without Administrative Privilege

Sometimes I want to install software tools, such as Git, Screen, and others, on a Linux system, but find myself without administrative privilege. The first method that comes to mind is to download the source code, compile it, and set it up. This method can sometimes be challenging because numerous dependencies may also be missing on the system.

Recently it occurred to me that we can do this via conda. For instance, the following steps let me install both Git and Screen on a Linux system without administrative privilege:

  1. Download miniconda.
    
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
        
  2. Set up miniconda
    
        bash Miniconda3-latest-Linux-x86_64.sh
        
  3. Initialize conda. Exit shell and get back in, and then
    
        conda init
        
  4. Install Git via conda
    
        conda install anaconda::git
        
  5. Install Screen via conda
    
        conda install conda-forge::screen
        
  6. Find and install others ...
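
As a quick check that the conda-provided binaries are the ones found on the PATH (assuming conda init has put the environment's bin directory ahead of the system paths), we can run:

which git && git --version
which screen && screen -v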

Some may think this method is overkill. However, it saves me from downloading and compiling tons of dependencies. Isn't our own time more valuable?


Wednesday, September 20, 2023

Setting up Conda Virtual Environment for Tensorflow

These steps create a conda virtual environment for running Tensorflow on a GPU. They work on Fedora Linux 38 and Ubuntu 22.04 LTS.

To install miniconda, we can run the following as a regular user:


curl -s "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" | bash

Following that, we create a conda virtual environment for Python.


# create conda virtual environment
conda create -n tf213 python=3.11 pip

# activate the environment in order to install packages and libraries
conda activate tf213

#
# the following are from Tensorflow pip installation guide
#
# install CUDA Toolkit 
conda install -c conda-forge cudatoolkit=11.8.0

# install python packages
pip install nvidia-cudnn-cu11==8.6.0.163 tensorflow==2.13.*

#
# setting up library and tool search paths
# scripts in activate.d shall be run when the environment
# is being activated
#
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
# get CUDNN_PATH
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
# set LD_LIBRARY_PATH
echo 'export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$CONDA_PREFIX/lib/:$LD_LIBRARY_PATH' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
# set XLA_FLAGS (for some systems, without this, it will lead to a 'libdevice not found at ./libdevice.10.bc' error)
echo 'export XLA_FLAGS=--xla_gpu_cuda_data_dir=$CONDA_PREFIX' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

To test it, we can run


source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

Enjoy!

Monday, September 18, 2023

Mounting File Systems in a Disk Image on Linux

On Linux systems, we can create a disk image using the dd command. This post lists the steps to mount file systems, in particular LVM volumes, in an image of a whole disk, which is often created as follows:


dd if=/dev/sdb of=/mnt/disk1/sdb.img bs=1M status=progress

Assuming the disk has multiple partitions, how do we mount the file systems on these partitions? The following are the steps,


# 1. mount the disk where the disk image is
#    we assume the disk is /dev/sdb1, and we mount
#    it on directory win
sudo mount /dev/sdb1 win

# 2. map the partitions to loopback devices
#    here we assume the disk image is win/disks/disk1.img
sudo losetup -f -P win/disks/disk1.img

# 3. list the LVM volumes
sudo lvdisplay

# 4. suppose from the output of the above command,
#    the volume is shown as /dev/mylvm/lvol0,
#    and we want it mounted on directory lvol0
sudo mount /dev/mylvm/lvol0 lvol0

# 5. do something we want ...


# 6. unmount the volume
sudo umount lvol0

# 7. deactivate LVM volume
#    we can query, confirm the volume group by
#    vgdisplay
sudo vgchange -a n mylvm

# 8. detach the loopback device
#    assuming the device is /dev/loop0
sudo losetup -d /dev/loop0

# 9. umount the disk
sudo umount win
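
One caveat: step 8 assumes the image was attached to /dev/loop0. To confirm which loop device losetup actually picked, we can run, for example,

# show the loop device backing the image file
sudo losetup -j win/disks/disk1.img

# list the device and its partitions
lsblk /dev/loop0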

Sunday, September 17, 2023

Mounting ZFS Dataset as /home

The following steps work:


# list ZFS pools and datasets
zfs list

# Query current mount point for a ZFS dataset, e.g., mypool/mydataset
zfs get mountpoint mypool/mydataset

# Set new mountpoint to /home
zfs set mountpoint=/home mypool/mydataset

# Always verify
zfs list
zfs get mountpoint mypool/mydataset

Persistent Mount Bind

The following /etc/fstab entry works:


/from_dir_path   /to_dir_path  none    bind,nofail
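
To apply the new entry without a reboot and verify the bind mount (assuming the line above has been added to /etc/fstab), we can run:

sudo mount -a
findmnt /to_dir_path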

Thursday, March 30, 2023

Failure to Bind a Process to a TCP/UDP Port on Windows

Windows has the concept of reserved (excluded) TCP/UDP port ranges. Ports in these ranges cannot be bound by ordinary applications. This can be annoying because the reserved ports do not show as used when we query open ports using netstat. For instance, if we want to bind TCP port 23806 for an application, we check its availability using the netstat command, such as


C:> netstat -anp tcp | find ":23806"

C:>

The output is blank, which suggests that the port is unused. However, when we attempt to bind the port to a process of our choice, we encounter an error, such as


bind [127.0.0.1]:23806: Permission denied

This is annoying. The reason is that the port somehow becomes a reserved port. To see this, we can query reserved ports, e.g.,


C:> netsh int ipv4 show excludedportrange protocol=tcp

Protocol tcp Port Exclusion Ranges

Start Port    End Port
----------    --------
      1155        1254
      ...          ...
     23733       23832
     23833       23932
     50000       50059     *

* - Administered port exclusions.


C:>
  

which shows that 23806 now falls within a reserved port range. What is really annoying is that these ranges can be updated by Windows dynamically. There are several methods to deal with this.

  1. Method 1. Stop and start the Windows NAT Driver service.
    
      net stop winnat
      net start winnat
      
    After this, query the reserved ports again. Often the reserved port ranges are much more limited than before, e.g.,
    
    C:>netsh int ipv4 show excludedportrange protocol=tcp
    
    Protocol tcp Port Exclusion Ranges
    
    Start Port    End Port
    ----------    --------
          2869        2869
          5357        5357
         50000       50059     *
    
    * - Administered port exclusions.
    
    C:>
      
  2. Method 2. If we don't wish to use this feature of Windows, we can disable reserved ports:
    
    reg add HKLM\SYSTEM\CurrentControlSet\Services\hns\State /v EnableExcludedPortRange /d 0 /f
    

Sunday, March 26, 2023

Installing GPU Driver for PyTorch and Tensorflow

To use a GPU for PyTorch and Tensorflow, a method I have grown fond of is to install the GPU driver from RPM Fusion, in particular on Fedora and similar systems whose default repositories include only free packages. With this method, we only install the driver from RPM Fusion and use a Python virtual environment to bring in the CUDA libraries.

  1. Configure the RPM Fusion repos by following the instructions, e.g., as follows:
    
        sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
        
  2. Install driver, e.g.,
    
        sudo dnf install akmod-nvidia
        
  3. Add CUDA support, i.e.,
    
        sudo dnf install xorg-x11-drv-nvidia-cuda
        
  4. Check driver by running nvidia-smi. If it complains about not being able to connect to the driver, reboot the system.

If we use PyTorch or Tensorflow only, there is no need to install CUDA from Nvidia.
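
As a quick sanity check from a Python virtual environment (a minimal sketch, assuming PyTorch has already been installed there with pip), we can run:

python3 -c "import torch; print(torch.cuda.is_available())"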

Reference

  1. https://rpmfusion.org/Configuration

Tuesday, February 7, 2023

Tensorflow Complains "successful NUMA node read from SysFS had negative value (-1)"

To test GPU support for Tensorflow, we should run the following, according to the Tensorflow manual:


python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

However, in my case, I saw an annoying message:


2023-02-07 14:40:01.345350: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero

A Stack Overflow discussion has an excellent explanation of this. I have a single CPU and a single GPU installed on the system, which runs Ubuntu 20.04 LTS. Following the advice given there, the following command gets rid of the message:


su -c "echo 0 | tee /sys/module/nvidia/drivers/pci:nvidia/*/numa_node"

That is sweet!

Reference

  1. https://www.tensorflow.org/install/pip#linux_setup
  2. https://stackoverflow.com/questions/44232898/memoryerror-in-tensorflow-and-successful-numa-node-read-from-sysfs-had-negativ

Saturday, February 4, 2023

Checking RAM Type on Linux

We can use the following command to check RAM types and slots


sudo dmidecode --type 17
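
To narrow the output down to the size, type, and speed of each module, we can filter it; the pattern below simply matches the usual dmidecode field names:

sudo dmidecode --type 17 | grep -E "Size:|Type:|Speed:"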

Reloading WireGuard Configuration File without Completely Restarting WireGuard Session

On Linux systems, under bash, we can run the following command to reload and apply a revised WireGuard configuration file without restarting the interface and disrupting the clients:


wg syncconf wg0 <(wg-quick strip wg0)

Note that this command may not work in shells other than bash, because it relies on process substitution. However, we can always complete this in a three-step fashion:


wg-quick strip wg0 > temp_wg0.conf
wg syncconf wg0 temp_wg0.conf
rm temp_wg0.conf
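
Either way, we can confirm that the updated configuration (e.g., the peer list) is in effect:

sudo wg show wg0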

Determining File System of Current Directory on Linux

On Linux, a simple command can reveal the file system on which the current directory is actually located. The command is


df -hT .

Sunday, January 29, 2023

Resetting Network Stack on Windows

Sometimes, I want to reset the network stack on Windows. I found that Intel has good documentation for it. I copy the steps below:

Resetting the network stack


ipconfig /release
ipconfig /flushdns
ipconfig /renew
netsh int ip reset
netsh winsock reset

Wednesday, January 25, 2023

Disabling the Linux Boot Splash Screen

Most Linux systems use Plymouth to display the splash screen during boot. If you are running the computer as a server and do not log in from the console, Plymouth can sometimes bring more trouble than it is worth. For one, to display the splash screen, Plymouth needs to interact with the driver of the graphics adapter in the system, and if there is an issue there, the system will not boot successfully. Since the server's console may not be conveniently accessible, this can be a real inconvenience.

To remove it on Linux systems like Fedora and Red Hat, we can do the following:


sudo grubby --update-kernel=ALL --remove-args="quiet"
sudo grubby --update-kernel=ALL --remove-args="rhgb"
# directly edit /etc/default/grub and add "rd.plymouth=0 plymouth.enable=0" to GRUB_CMDLINE_LINUX
sudo vi /etc/default/grub
sudo grub2-mkconfig -o /etc/grub2.cfg
sudo dnf remove plymouth
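
To confirm that the kernel arguments were actually updated, a quick check on Fedora/Red Hat-style systems is:

sudo grubby --info=ALL | grep -i args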