
Veeam VMs deleted – Tape Job

When you delete a VM from VMware that has been backed up by Veeam and a tape job, the tape job will prompt a warning. This can easily be fixed by disabling and re-enabling the tape job.

Graylog fix wrong field type

Sometimes you’ll get an indexing error because the field type couldn’t be matched. It will look something like this:

ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=mapper [serial_number] cannot be changed from type [keyword] to [date]]]

You can fix this by changing the mapping in OpenSearch. Usually you would do this directly on the index, but with rotating indexes like in Graylog this won’t work. That’s why we need to create a template that automatically adds the mapping to all new indexes. Like this:

echo '{
  "order" : 10,
  "template": "myindex_*",
  "mappings" : {
      "properties" : {
          "serial_number" : {
            "type" : "keyword"
          }

    }
  }
}' > myindex-mappingfix-serial_number.json

curl -X PUT -d @myindex-mappingfix-serial_number.json -H 'Content-Type: application/json' 'http://localhost:9200/_template/myindex-mappingfix-serial_number?pretty'

Insert the name of the index (for example graylog, if it is the default index set) and the field name.

After adding this template, you’ll have to rotate the index and it will be applied.
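To double-check that the template was stored, you can query it back (assuming OpenSearch listens on localhost:9200 as above):

curl 'http://localhost:9200/_template/myindex-mappingfix-serial_number?pretty'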

acme.sh & traefik cert issue

acme.sh renew doesn’t work

Let’s tackle the acme.sh issue first. I sent a renew command with manual DNS verification; the renew went through without errors, but the cert didn’t renew. This is a known issue: https://github.com/acmesh-official/acme.sh/issues/4041

The solution is to delete these lines in the config file under ~/.acme.sh/yourdomain/yourdomain.conf:

Le_OrderFinalize='https://acme.zerossl.com/v2/DV90/order/XXXXXXXXXXXX/finalize'
Le_LinkOrder='https://acme.zerossl.com/v2/DV90/order/XXXXXXXXXXXX'
Le_LinkCert='https://acme.zerossl.com/v2/DV90/cert/XXXXXXXXXXXX'
Le_CertCreateTime='1730000000'
Le_CertCreateTimeStr='2024-11-05T18:00:00Z'
Le_NextRenewTimeStr='2025-01-03T18:00:00Z'
Le_NextRenewTime='1740000000'

After that, send the usual renew command and it will work.
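For reference, the renew command in manual DNS mode looks something like this (the domain is a placeholder):

acme.sh --renew -d yourdomain --yes-I-know-dns-manual-mode-enough-go-ahead-please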

Traefik is not updating the certs after renew

Of course, after renewing the certs, Traefik didn’t want to do its job. The hot-reload didn’t trigger. This can be “fixed” by editing the file provider file; the watcher will pick that up and reload the certs. Just sending a touch command didn’t do the trick for me.

When you add an empty line to the file, make sure it doesn’t contain any spaces, or Traefik will see it as an invalid config.
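A safe way to append a clean empty line is printf, so no stray spaces sneak in (the path to your file provider config is an assumption, adjust it):

printf '\n' >> /etc/traefik/dynamic/certs.yml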

VMware Tools Install – Error 21004

When we tried to mount the VMware Tools with the “Install VMware Tools” button, we got error 21004. In the vmware.log of that particular VM, we found these errors:

2024-11-05T09:27:06.475Z In(05) vmx 4fbba83e-64-7edd ToolsISO: open of /vmfs/volumes/6728b0c4-bcf76a7d-0eee-d404e6734610/vmtools/latest/vmtools/isoimages_manifest.txt failed: Error
2024-11-05T09:27:06.483Z In(05) vmx 4fbba83e-64-7edd FILE:open error on /vmfs/volumes/6728b0c4-bcf76a7d-0eee-d404e6734610/vmtools/latest/vmtools/isoimages_manifest.txt: Operation not permitted
2024-11-05T09:27:06.483Z In(05) vmx 4fbba83e-64-7edd ToolsISO: open of /vmfs/volumes/6728b0c4-bcf76a7d-0eee-d404e6734610/vmtools/latest/vmtools/isoimages_manifest.txt failed: Error
2024-11-05T09:27:06.485Z In(05) vmx 4fbba83e-64-7edd FILE:open error on /vmfs/volumes/6728b0c4-bcf76a7d-0eee-d404e6734610/vmtools/latest/vmtools/windows.iso: Operation not permitted

This is a known issue where changing the product locker leads to file permission errors. It can be fixed by reapplying the file permissions.

Put your host in maintenance mode and run this command:

secpolicytools -p
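The maintenance mode part can be done from the shell as well (standard esxcli, run directly on the host):

# Enter maintenance mode
esxcli system maintenanceMode set --enable true

# ... run secpolicytools -p, then leave maintenance mode again
esxcli system maintenanceMode set --enable false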

Self-Signed Cert on Windows

It’s easy to create a self-signed cert on Windows:

$domain = "my-domain.example.org"
$certificate = New-SelfSignedCertificate `
-Subject "CN=$domain" `
-CertStoreLocation "Cert:\LocalMachine\My" `
-KeyExportPolicy Exportable `
-KeySpec Signature `
-KeyLength 2048 `
-KeyAlgorithm RSA `
-HashAlgorithm SHA256 `
-Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" `
-NotAfter (Get-Date).AddYears(5) -Verbose

If you have something like SQL Server Reporting Services, you’ll have to trust the certificate, so we can extend the command like this:

$domain = "my-domain.example.org"
$certificate = New-SelfSignedCertificate `
-Subject "CN=$domain" `
-CertStoreLocation "Cert:\LocalMachine\My" `
-KeyExportPolicy Exportable `
-KeySpec Signature `
-KeyLength 2048 `
-KeyAlgorithm RSA `
-HashAlgorithm SHA256 `
-Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" `
-NotAfter (Get-Date).AddYears(5) -Verbose

# Note: $pwd is a PowerShell automatic variable, so use a different name
$certPwd = ConvertTo-SecureString "password1234" -AsPlainText -Force
Export-PfxCertificate -Cert $certificate -FilePath "C:\temp\self.pfx" -Password $certPwd
Import-PfxCertificate -FilePath "C:\temp\self.pfx" -CertStoreLocation Cert:\LocalMachine\Root -Password $certPwd
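To confirm the certificate really landed in the trusted root store, a quick lookup should show it:

# List trusted root certs matching our domain
Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -eq "CN=$domain" }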

K3S Traefik – HTTP Redirection

You want to redirect all traffic from HTTP to HTTPS? You got a K3S cluster with Traefik installed? Fear not, my friend.

I almost started crying; no matter what I tried, it didn’t work. Finally I found the solution. We just create a HelmChartConfig, which basically injects config into the bundled helm chart, and add a few lines to it. It will look something like this:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      web:
        redirectTo:
          port: "websecure"

Now just apply this config and everything will be good. 🙂
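If you need the exact commands (the filename is just an example):

kubectl apply -f traefik-redirect.yaml

Alternatively, K3S automatically applies manifests placed in /var/lib/rancher/k3s/server/manifests/ on the server node, so you can also drop the file there.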

Kill VMware VM with CLI

# List running VMs and note the World ID of the stuck one
esxcli vm process list

# Kill it: try a hard kill first, force only as a last resort
esxcli vm process kill --type=hard --world-id=WorldID
esxcli vm process kill --type=force --world-id=WorldID

Cron in Docker (PHP)

You wanna get cron running in a container? Maybe a PHP-based container? This sucks. So let’s try to figure it out.

What I did is create a Docker image as follows:

FROM php:7.3-apache
USER root

# Install Cron & Sudo Package
RUN apt-get update && \
    apt-get install -y cron sudo

# OTHER STUFF GOES HERE

# Copy the cron scripts and the entrypoint into the image
COPY cron_1m.sh /usr/local/bin/cron_1m.sh
COPY cron_5m.sh /usr/local/bin/cron_5m.sh
COPY entrypoint.sh /usr/local/bin/entrypoint.sh

# Get it up and running
RUN chmod +x /usr/local/bin/cron_1m.sh && \
    chmod +x /usr/local/bin/cron_5m.sh && \
    chmod +x /usr/local/bin/entrypoint.sh && \
    echo "* * * * * sudo -u www-data /usr/local/bin/cron_1m.sh > /proc/1/fd/1 2>/proc/1/fd/1\n*/5 * * * * sudo -u www-data /usr/local/bin/cron_5m.sh > /proc/1/fd/1 2>/proc/1/fd/1" | crontab -

# Getting PHP running
WORKDIR /var/www/html
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["www-data", "/usr/local/bin/apache2-foreground"]

The entrypoint looks like this:

#!/bin/bash

# Prepare Environment: dump the container env so cron jobs can source it later
printenv | grep -v printenv | sed -E 's;^([A-Za-z_]+)=(.*)$;export \1="\2";g' > /etc/container.env

# Drop the user argument. Note: a plain `su $1` would not change the user of
# the exec below; apache2-foreground has to start as root (it drops to
# www-data itself), and the cron container is meant to run as root anyway.
shift

# Exec Command
exec "$@"

And we add the cron scripts (cron_1m.sh / cron_5m.sh), which look like this:

#!/bin/bash

# Get environment
source /etc/container.env

# For Debugging: /bin/echo "[$(date)] Running cron (*/5)" # Or (*/1), whichever cron it is

# Get in the right place
cd /var/www/html

# Running your script here

Now we can use this image normally to run a PHP container that does its work as the www-data user.
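A minimal compose service for that web container could look like this (port mapping and build context are assumptions based on the setup above):

services:
  web:
    build:
      dockerfile: Dockerfile
      context: ./image
    restart: always
    ports:
    - "8080:80"
    # The image default CMD already runs "www-data /usr/local/bin/apache2-foreground"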

For our cron job we start a second container like this:

services:
  cron:
    build:
      dockerfile: Dockerfile
      context: ./image
    restart: always
    command:
    - "root" # Don't Change to www-data
    - "/usr/sbin/cron" # Run Cron in this container
    - "-f" # Run in foreground  

Environment Variables

Environment variables are not loaded into your cron jobs, so you have to do this in your entrypoint:

printenv | grep -v printenv | sed -E 's;^([A-Za-z_]+)=(.*)$;export \1="\2";g' > /etc/container.env

And in your cronjob add this:

source /etc/container.env

Elasticsearch 2 Opensearch

You might want to move from Elasticsearch to OpenSearch due to the changes in Graylog 5. I was in the same position. We currently deploy mostly single-node standalone environments, and this procedure is suited for those.

Step 1 – Update MongoDB

We are upgrading from 4.2 to 6.0. This can only be done in steps: 4.2 -> 4.4 -> 5.0 -> 6.0. Use these commands to upgrade:

#####################
# Upgrade 4.2 to 4.4
#####################

fromRepo=4.2
toRepo=4.4
toVersion=-4.4.25-1.el8

sed -i "s;$fromRepo;$toRepo;" /etc/yum.repos.d/mongodb-org.repo

dnf install -y mongodb-org$toVersion \
  mongodb-org-database$toVersion \
  mongodb-org-database-tools-extra$toVersion \
  mongodb-org-mongos$toVersion \
  mongodb-org-server$toVersion \
  mongodb-org-shell$toVersion \
  mongodb-org-tools$toVersion

systemctl restart mongod

mongo --eval "db.adminCommand( { setFeatureCompatibilityVersion: \"$toRepo\" } )"


#####################
# Upgrade 4.4 to 5.0
#####################

fromRepo=4.4
toRepo=5.0
toVersion=-5.0.21-1.el8

# Run same script


#####################
# Upgrade 5.0 to 6.0
#####################

fromRepo=5.0
toRepo=6.0
toVersion=

# Run same script, but note: the 6.0 packages ship mongosh instead of the
# legacy mongo shell, so run the setFeatureCompatibilityVersion command
# with mongosh
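You can confirm the feature compatibility version after the last step like this:

mongosh --eval 'db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })'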

Step 2 – Install Opensearch

You can do this manually or with a script/ansible role. Make sure to configure a different port while migrating.

I like to start Graylog up quickly after the installation to let it connect to OpenSearch and set things up early.

If Graylog acts up because of a mismatch between the Elasticsearch/OpenSearch versions, you can force it to assume a specific version with this setting:

elasticsearch_version = 7

See https://github.com/Graylog2/graylog2-server/issues/12897

You have to edit two settings manually in opensearch.yml:

# Let Opensearch run on a different port while migrating
http.port: 9211

# Allow reindexing from elasticsearch
reindex.remote.allowlist: ["localhost:9200"]

Step 3 – Prepare Bash

Use these variables:

elasticServer=localhost:9200
opensearchServer=localhost:9211
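A quick sanity check that both services answer before migrating:

curl "http://$elasticServer"
curl "http://$opensearchServer"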

Step 4 – Prepare Template

I had some issues with templates not working correctly, that’s why I use this template to “fix” it:

curl -XPUT $opensearchServer/_template/graylog-field-fix  -H 'Content-Type: application/json' -d '
{
  "order" : 1,
  "index_patterns" : [
    "*_*"
  ],
  "mappings" : {
    "dynamic_templates" : [
      {
        "internal_fields" : {
          "mapping" : {
            "type" : "keyword"
          },
          "match_mapping_type" : "string",
          "match" : "gl2_*"
        }
      },
      {
        "store_generic" : {
          "mapping" : {
            "type" : "keyword"
          },
          "match_mapping_type" : "string"
        }
      }
    ],
    "properties": {
      "gl2_processing_timestamp" : {
        "format" : "uuuu-MM-dd HH:mm:ss.SSS",
        "type" : "date"
      },
      "gl2_accounted_message_size" : {
        "type" : "long"
      },
      "gl2_receive_timestamp" : {
        "format" : "uuuu-MM-dd HH:mm:ss.SSS",
        "type" : "date"
      },
      "full_message" : {
        "fielddata" : false,
        "analyzer" : "standard",
        "type" : "text"
      },
      "streams" : {
        "type" : "keyword"
      },
      "message" : {
        "fielddata" : false,
        "analyzer" : "standard",
        "type" : "text"
      },
      "timestamp" : {
        "format" : "uuuu-MM-dd HH:mm:ss.SSS",
        "type" : "date"
      }

    }
  },
  "settings" : {
    "index" : {
      "mapping.total_fields.limit" : "10000"
    }
  }
}'
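As before, you can read the template back to make sure it was accepted:

curl "$opensearchServer/_template/graylog-field-fix?pretty"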

Step 5 – Data Migration

You can now start a reindex script to migrate data like this:

curl -XGET "$elasticServer/_cat/indices?v" 2>/dev/null | awk '{print $3}' | tail -n+2 | while read indexName ; do

  echo "################################################"
  echo "$(date) - $indexName - create"
  curl -XPUT "$opensearchServer/$indexName" -H 'Content-Type: application/json' -d '{
    "settings": {
      "index": {
        "blocks" : {
          "write" : "false",
          "metadata" : "false",
          "read" : "false"
        },
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    }
  }'



  echo "################################################"
  echo "$(date) - $indexName - start reindex"

  curl http://$opensearchServer/_reindex?pretty -XPOST -H 'Content-Type: application/json' -d "{
      \"source\": {
        \"remote\": {
          \"host\": \"http://$elasticServer\"
        },
        \"index\": \"$indexName\"
      },
      \"dest\": {
        \"index\": \"$indexName\"
      }
    }"



  echo "################################################"
  echo "$(date) - $indexName - delete"
  curl -XDELETE $elasticServer/$indexName

  echo "################################################"
  echo "$(date) - $indexName - done"
  echo "################################################"
  echo "################################################"

done
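Since the loop deletes each source index right after reindexing it, it’s worth checking the indexes and document counts on the OpenSearch side once it’s done:

curl "$opensearchServer/_cat/indices?v"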

Step 6 – Uninstall Elasticsearch

Now uninstall Elasticsearch, change the OpenSearch port back to the original, and make sure the service is enabled. If you used the elasticsearch_version fix, remove it.
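On the single-node RPM-based setups described above, that boils down to something like this (package and service names assumed from the standard installs):

# Remove elasticsearch
dnf remove -y elasticsearch

# In /etc/opensearch/opensearch.yml: set http.port back to 9200 and
# remove the reindex.remote.allowlist line, then restart everything
systemctl enable --now opensearch
systemctl restart graylog-server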

Step 7 – Optional: Fix Reindex

If something with the template didn’t work out, you can reindex everything like this:

curl -XGET "localhost:9200/_cat/indices?v" 2>/dev/null | awk '{print $3}' | tail -n+2 | grep -v ".open" | while read line ; do

  echo "$(date) - $line - start"
  curl -XPUT localhost:9200/$line/_settings  -H 'Content-Type: application/json'  -d '{
    "settings": {
      "index.blocks.write": "true"
    }
  }' 
  echo "$(date) - $line - clone"
  curl -XPOST localhost:9200/$line/_clone/old_$line 
  echo "$(date) - $line - delete"
  curl -XDELETE localhost:9200/$line 
  echo "$(date) - $line - reindex"
  curl http://localhost:9200/_reindex?pretty -XPOST -H 'Content-Type: application/json' -d "{
    \"source\": {
      \"index\": \"old_$line\"
    },
    \"dest\": {
      \"index\": \"$line\"
    }
  }" 
  echo "$(date) - $line - delete"
  curl -XDELETE localhost:9200/old_$line 
  echo "$(date) - $line - done"


done

Step 8 – Optional: Fix Settings

You can now fix index settings like this:

# Block writes on everything and set replica count to 0
curl -XGET "http://localhost:9200/_cat/indices?v" 2>/dev/null | awk '{print $3}' | tail -n+2 | xargs -I{} curl -XPUT localhost:9200/{}/_settings  -H 'Content-Type: application/json'  -d '{
    "settings": {
      "index": {
        "blocks" : {
          "write" : "true",
          "metadata" : "false",
          "read" : "false"
        },
        "number_of_replicas": "0"
      }
    }
  }'

# Unblock writes on active indexes
curl -XGET http://localhost:9200/_cat/aliases 2>/dev/null | awk '{print $2}' | xargs -I{} curl -XPUT localhost:9200/{}/_settings  -H 'Content-Type: application/json'  -d '{
    "settings": {
      "index": {
        "blocks" : {
          "write" : "false",
          "metadata" : "false",
          "read" : "false"
        }
      }
    }
  }'

If the deflector breaks, you can fix it like this:

curl -X POST "localhost:9200/_aliases?pretty" -H 'Content-Type: application/json' -d'                                                                
{
  "actions" : [
    { "add" : { "index" : "graylog_X", "alias" : "graylog_deflector" } }
  ]
}'

Fix Graylog Watermark

If you already monitor your Graylog server and run a single-node instance, there is no real need for the disk watermark on your Elasticsearch/OpenSearch server. With byte values like below, the watermarks act as free-space thresholds (e.g. allocation only stops once less than 50g is free). Here you go:

curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {

    "cluster.routing.allocation.disk.watermark.low": "50g",
    "cluster.routing.allocation.disk.watermark.high": "1g",
    "cluster.routing.allocation.disk.watermark.flood_stage": "512m",
    "cluster.info.update.interval": "15m"
  }
}'
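To verify (and since transient settings don’t survive a full cluster restart anyway), you can read them back like this:

curl "localhost:9200/_cluster/settings?pretty"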