vCenter Change FQDN/URL for Login

# Stop vCenter UI
service-control --stop vsphere-ui

# Backup Config
cp /etc/vmware/vsphere-ui/webclient.properties /root/webclient.properties.BAK.$(date +%s)

# Edit the file, uncomment the setting and add your domains
vi /etc/vmware/vsphere-ui/webclient.properties
##################
sso.serviceprovider.alias.whitelist=vcenter,vcenter.domain.local
##################

# Start vCenter UI
service-control --start vsphere-ui

The application ‘add’ does not exist.

You’re trying to run dotnet add package <PACKAGE> on a RHEL-based system and get this error:

The command could not be loaded, possibly because:
  * You intended to execute a .NET application:
      The application 'add' does not exist.
  * You intended to execute a .NET SDK command:
      No .NET SDKs were found.

Download a .NET SDK:
https://aka.ms/dotnet-download

Learn about SDK resolution:
https://aka.ms/dotnet/sdk-not-found

Issue

The problem is that the .rpm in the Microsoft repo depends on packages like dotnet-hostfxr-6.0.x86_64, which also exist in the default AppStream repo, so dnf resolves them from the wrong repository.

Solution

You’re in luck. There is a simple fix:

dnf remove dotnet-*
dnf install --repo=packages-microsoft-com-prod dotnet-sdk-6.0
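
Alternatively, you can keep dnf from resolving the .NET packages out of AppStream at all by excluding them there; a minimal sketch, assuming the stock RHEL repo layout (the file and section names may differ on your system):

# append to the [appstream] section of /etc/yum.repos.d/<your-appstream>.repo
# so dnf never considers the AppStream builds of these packages
excludepkgs=dotnet*,aspnet*,netstandard*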

SSH for Windows

A lot of people might be aware that Windows now supports the OpenSSH client natively. Just head over to PowerShell and enter ssh server.example.org to start an SSH connection.

But…

How many knew that you can set up an OpenSSH server too?

Let’s give it a try:

# Install Feature
Get-WindowsCapability -Online | ? {$_.Name -like 'OpenSSH.Server*'} | Select -First 1 | Add-WindowsCapability -Online

# Start the sshd service
Start-Service sshd

# OPTIONAL but recommended:
Set-Service -Name sshd -StartupType 'Automatic'

# Confirm the Firewall rule is configured. It should be created automatically by setup. Run the following to verify
if (!(Get-NetFirewallRule -Name "OpenSSH-Server-In-TCP" -ErrorAction SilentlyContinue | Select-Object Name, Enabled)) {
    Write-Output "Firewall Rule 'OpenSSH-Server-In-TCP' does not exist, creating it..."
    New-NetFirewallRule -Name 'OpenSSH-Server-In-TCP' -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22
} else {
    Write-Output "Firewall rule 'OpenSSH-Server-In-TCP' has been created and exists."
}
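
By default an incoming SSH session lands in cmd.exe. If you prefer PowerShell, you can set it as the default shell; a minimal sketch using the OpenSSH DefaultShell registry value:

# Make PowerShell the default shell for incoming SSH sessions
New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -PropertyType String -Force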

Shrink Thin Provisioned Disk

Here’s a very quick overview of how to shrink a bloated vmdk.

Check Datastore

Get ID of Datastore:

esxcli storage core device list | grep -B1 '  Display Name:'

Check state of Datastore:

$ esxcli storage core device list -d naa.668a828100177dc6c624663100000006 | grep 'Thin Provisioning\|Attached Filter\|VAAI\|Revision'
   Revision: XXXX
   Thin Provisioning Status: yes
   Attached Filters:
   VAAI Status: supported

$ esxcli storage core device vaai status get -d naa.668a828100177dc6c624663100000006 | grep 'Delete Status'
   Delete Status: supported

Clear File-System

Windows

You can use SDelete by Sysinternals: https://docs.microsoft.com/en-us/sysinternals/downloads/sdelete

sdelete.exe -z C:\

Linux

Create a file containing zeros until the disk is full, then delete it:

dd if=/dev/zero of=/[mounted-volume]/zeroes && rm -f /[mounted-volume]/zeroes

Shrink VMDK

Now you can SSH into the ESXi host and “punch” the zeros:

vmkfstools -K [disk].vmdk
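
For example (the datastore and VM names here are placeholders):

vmkfstools -K /vmfs/volumes/datastore1/myvm/myvm.vmdk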

OpenSSL Convert

PFX to Key

openssl pkcs12 -in filename.pfx -nocerts -out key_pw.pem

Remove password from key

openssl rsa -in key_pw.pem -out key.pem

PFX to Cert

openssl pkcs12 -in filename.pfx -clcerts -nokeys -out cert.pem

Fixing PFX with Issues (pfx -> pem -> pfx)

Open PowerShell in the bin folder of OpenSSL and run these commands (old.pfx is your current file, new.pfx will be the fixed one; you might have to enter the pfx password a few times):

openssl.exe pkcs12 -in old.pfx -nocerts -out key_pw.pem
openssl.exe rsa -in key_pw.pem -out key.pem
openssl.exe pkcs12 -in old.pfx -clcerts -nokeys -out cert.pem
openssl.exe pkcs12 -inkey key.pem -in cert.pem -export -out new.pfx

LVM Disk Resize

Reload Disk Info

echo 1 > /sys/block/sdX/device/rescan

Resize Disk

Use parted to resize the partition if needed:

# for example /dev/sda
$ parted /dev/sdX
(parted) print free
(...)
Number  Start   End     Size    File system  Name                  Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  630MB   629MB   fat32        EFI System Partition  boot, esp
 2      630MB   1704MB  1074MB  xfs
 3      1704MB  64.4GB  62.7GB                                     lvm
        64.4GB  164.4GB 100GB   Free Space

(parted) resizepart 3
End?  [64.4GB]? 164.4GB
(parted) quit

Resize pv

# for example /dev/sda3
pvresize /dev/sdXX

Resize lv

# Specific size
lvresize --size 1.34t /dev/mapper/cl-root

# Full
lvresize -l+100%FREE /dev/mapper/cl-root

Resize fs

# for ext2/3/4
resize2fs /dev/mapper/cl-root
# for xfs
xfs_growfs /

Graylog Migration

How

The easiest way to migrate a Graylog instance is to build a new one and migrate the Elasticsearch data by joining the two Elasticsearch nodes into a cluster and replicating all data.

Which issues will I have?

Hopefully none! But realistically, you will have to reinstall content packs or at least clone & delete all streams; they seem to fail during the import.

Procedure

Stop graylog-server before you start with anything!
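
On a systemd-based install that means:

systemctl stop graylog-server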

Setup Elasticsearch Cluster

Open ports 9200 & 9300 (for nftables that is: nft add rule default INPUT tcp dport 9200 accept ; nft add rule default INPUT tcp dport 9300 accept).

Prepare both nodes by editing the config file vim /etc/elasticsearch/elasticsearch.yml and changing:

New One

node.name: uniqlog-02
cluster.name: graylog
network.host: 10.10.2.38
http.port: 9200
discovery.seed_hosts: ["10.10.2.37", "10.10.2.38"]
cluster.initial_master_nodes: ["10.10.2.37", "10.10.2.38"]
node.master: true

Old One

node.name: uniqlog-01
cluster.name: graylog
network.host: 10.10.2.37
http.port: 9200
discovery.seed_hosts: ["10.10.2.37", "10.10.2.38"]
cluster.initial_master_nodes: ["10.10.2.37", "10.10.2.38"]
node.master: true

Restart elasticsearch on both nodes (systemctl restart elasticsearch) & check if the cluster has 2 nodes: curl -XGET http://10.10.2.37:9200/_cluster/health?pretty

If they don’t want to cluster, try renaming /var/lib/elasticsearch/nodes/0/_state on the new server while elasticsearch is stopped.
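
A sketch of that (the .bak suffix is arbitrary):

systemctl stop elasticsearch
mv /var/lib/elasticsearch/nodes/0/_state /var/lib/elasticsearch/nodes/0/_state.bak
systemctl start elasticsearch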

Check on state

while true ; do echo "$(date +%T) $(curl -s -XGET http://10.6.248.20:9200/_cat/indices?v | awk '{print $3}' | tail -n +2 | xargs -I{} curl -s -XGET http://10.6.248.20:9200/_cat/shards/{}/ | grep "UNASSIGNED" | wc -l)" ; sleep 1; done

Migrate Data

curl -s -XGET http://10.10.2.37:9200/_cat/indices?v
curl -s -XGET http://10.10.2.38:9200/_cat/indices?v

If you check the indices, you will see that they aren’t replicated yet:

Increase replicas for all indexes:

curl -s -XGET http://10.10.2.37:9200/_cat/indices?v | awk '{print $3}' | tail -n +2 | xargs -I{} curl -XPUT 10.10.2.37:9200/{}/_settings -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 1,
    "auto_expand_replicas": false
  }
}'

Restart elasticsearch again if necessary + wait and watch (all shards need to be allocated + indices on green):

watch "curl -s -XGET http://10.10.2.37:9200/_cat/indices?v ; echo ==========================; curl -s -XGET http://10.10.2.38:9200/_cat/shards?v"

# OR

watch 'echo "$(curl -s -XGET http://10.129.205.31:9200/_cat/shards?v | grep "205.30" | wc -l) / $(curl -s -XGET http://10.129.205.31:9200/_cat/shards?v | grep "  p  " | wc -l)"'

Evacuate node (if needed):

curl -XPUT 10.10.2.37:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "10.10.2.37"
  }
}'

Breakup Cluster

Remove voting rights for the old node:

curl -X POST "10.10.2.38:9200/_cluster/voting_config_exclusions?node_names=uniqlog-01"

Reset replica count:

curl -s -XGET http://10.10.2.38:9200/_cat/indices?v | awk '{print $3}' | tail -n +2 | xargs -I{} curl -XPUT 10.10.2.38:9200/{}/_settings -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 0
  }
}'

Stop elasticsearch on the old server
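
Again, on a systemd-based install:

systemctl stop elasticsearch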

Edit the elasticsearch config on the new server & restart:

node.name: uniqlog-02
cluster.name: graylog
network.host: localhost
http.port: 9200
discovery.seed_hosts: ["localhost"]
cluster.initial_master_nodes: ["localhost"]
node.master: true

Remove voting exclusion:

curl -X DELETE "localhost:9200/_cluster/voting_config_exclusions"

Dump & Restore MongoDB

On the old server, dump MongoDB with mongodump & copy the whole dump folder to the new server. Import it with mongorestore dump.
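
A minimal sketch (NEWSERVER is a placeholder; mongodump writes to ./dump by default):

# on the old server
mongodump
scp -r dump root@NEWSERVER:
# on the new server
mongorestore dump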

Finished

Start graylog-server again.
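
On a systemd-based install:

systemctl start graylog-server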

Reindex if it doesn’t work

newIP=10.10.2.38

# Update template shards (just ignore the fact that elasticsearch can't handle basic stuff like update..... it creates a duplicate.....)
curl -s -XPOST http://$newIP:9200/_index_template/shrttrm_-template -H 'Content-Type: application/json' -d '{
  "index_patterns" : ["shrttrm__*"],
  "template": {
    "settings": {
      "number_of_shards": 1
    }
  }
}'

curl -s -XGET http://$newIP:9200/_cat/indices?v | grep -i yellow | awk '{print $3}' | while read line ; do
    curl -s -XPUT "$newIP:9200/${line}_reindex" -H 'Content-Type: application/json' -d "{
    $(curl -s -XGET "$newIP:9200/${line}/_mapping" | tail -n +3 | head -n -2 )
  }"

    echo "$line -> ${line}_reindex"
  curl -s -XPOST http://$newIP:9200/_reindex -H 'Content-Type: application/json' -d "{
    \"source\": {
        \"index\": \"${line}\"
    },
    \"dest\": {
        \"index\": \"${line}_reindex\",
            \"version_type\": \"external\"
    }
  }"
done



For reference, the same reindex call in plain Kibana Dev Tools syntax:

POST _reindex
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}

Exchange Online SMTP Anonymous Relay

Add firewall rule:

ip saddr YOURIP/32 tcp dport 25 accept

Install:

dnf install -y postfix

echo "
myhostname = smtp.suter.dev
mydomain = noreply.suter.dev
mynetworks = 127.0.0.0/8 YOURIPHERE
myorigin = \$myhostname
relayhost = [YOURSERVER.mail.protection.outlook.com]
disable_dns_lookups = yes
smtpd_client_restrictions = permit_mynetworks, reject
smtpd_relay_restrictions = permit_mynetworks, reject
" > /etc/postfix/main.cf

systemctl restart postfix

In O365, create a connector for anonymous relay from this IP.
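
To verify the relay end-to-end, you can push a test mail through postfix from an allowed client; a minimal sketch (both addresses are placeholders):

printf "Subject: relay test\n\ntest body\n" | sendmail -f noreply@noreply.suter.dev recipient@example.org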

Bash copy only missing directories

# Make sure you either use a trailing / on both or none of the following variables!!!
SOURCE_DIR="/path/to/source"
DEST_DIR="/path/to/dest"

find "$SOURCE_DIR" -maxdepth 1 -type d | sed -E "s;^$SOURCE_DIR;;" | while read -r dir ; do
  test -d "$DEST_DIR$dir" || /bin/cp -Rf "$SOURCE_DIR$dir" "$DEST_DIR$dir"
done
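
If rsync is available and copying missing files inside existing directories is acceptable too, the same idea is a one-liner (a sketch, not an exact equivalent of the loop above):

rsync -a --ignore-existing "$SOURCE_DIR/" "$DEST_DIR/"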

Bash Arguments

VAR1="a"
VAR2="b"
VAR3="c"
ARGS=""

while [[ $# -gt 0 ]]; do
  case "$1" in
    -1|--var1) # --var1 value
      VAR1="$2"
      shift ; shift
      ;;
    -2|--var2) # --var2 value
      VAR2="$2"
      shift ; shift
      ;;
    -3|--var3) # switch, no value needed
      VAR3=1
      shift
      ;;
    *) # collect any remaining arguments
      ARGS="$ARGS $1"
      shift
      ;;
  esac
done
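
A quick usage example (the script name is a placeholder):

./script.sh --var1 foo -2 /some/path --var3 anything else
# afterwards: VAR1=foo, VAR2=/some/path, VAR3=1, ARGS=" anything else"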