Home Lab Proxmox Upgrade and using Lenovo/AMD DASH

What is DASH?

This is what Lenovo says:

DASH (Desktop and mobile Architecture for System Hardware) is a set of specifications developed by
the DMTF, which aims to provide open-standards-based web services management for desktop and mobile
client systems. DASH is a comprehensive framework that provides a new generation of standards for
secure out-of-band and remote management of desktop and mobile systems in multi-vendor,
distributed enterprise environments. DASH uses the same tools, syntax, semantics, and
interfaces across the product line (traditional desktop systems, mobile and laptop computers, blade
PCs, and thin clients).


Why are you using it?

I’ve upgraded my homelab Proxmox instance to a Lenovo ThinkCentre M75q Gen2 with an AMD Ryzen PRO:


  • Processor: AMD Ryzen™ 5 PRO 5650GE Processor (3.40 GHz up to 4.40 GHz)
  • Operating System: Windows 11 Pro 64
  • Graphic Card: Integrated AMD Radeon™ Graphics
  • Memory: 8 GB Non-ECC DDR4-3200MHz (SODIMM)
  • Storage: 256 GB SSD M.2 2280 PCIe TLC Opal
  • AC Adapter / Power Supply: 65W
  • Networking: Integrated Gigabit Ethernet
  • WiFi Wireless LAN Adapters: Intel® Wireless-AC 9260 2×2 AC & Bluetooth® 5.1 or above

It’s not a bad system, especially moving from a Dell OptiPlex 7060:

  • Processor: Intel(R) Core(TM) i7-8700T CPU @ 2.40GHz
  • Memory: 32 GB Non-ECC DDR4 Memory Non-ECC 2666MHz

Upgrades to the ThinkCentre M75q Gen2

Since the machine ships with only 8GB, I opted for a non-ECC 32GB kit. The ThinkCentre M75q Gen2 supports up to 64GB, but I felt that going to 64GB wasn’t necessary yet. I also skipped ECC, as this was meant to be a low-cost upgrade.

I dropped in a Kingston Gen 4 NVMe 1TB drive, and a 1TB SSD just because I had it lying around.

AMD Management Console Downloads

You will need to download the AMD Management Console to manage the system over DASH.


Setting up DASH

I was able to enable DASH support in the BIOS but didn’t find any configuration options.

Additional Resources

Archiving Facebook Messages and Facebook Marketplace Messages

Too many Facebook Messages

I had a ton of Facebook Marketplace messages that I was annoyed with and wanted archived, so I found several resources online about using the Chrome console to run JavaScript that archives the messages.

JavaScript Gist and More

I found a gist with the needed code, but it didn’t work. Reading the comments, there was updated code buried there, along with another GitHub repository.

Archive all of the messages in your Facebook Messages Inbox · GitHub
Archive all of the messages in your Facebook Messages Inbox – archive-all-facebook-messages.js

My modified code

I took the original code and modified it with ChatGPT. I had it limit the number of times the script would run (essentially, how many messages it would archive), and I also added a delay to make sure I didn’t get blocked by Facebook.

let executionCount = 0;
const MAX_RUNS = 100; // stop after archiving this many conversations

function run() {
  if (executionCount >= MAX_RUNS) return; // hit the limit, stop
  let all = document.querySelectorAll('div[aria-label="Menu"]');
  if (all.length == 0) return;
  all[1].click(); // open the conversation's menu
  let menuitems = document.querySelectorAll('div[role=menuitem]');
  let archiveChatRegex = /Archive chat/;
  for (let i = 0; i < menuitems.length; i++) {
    if (archiveChatRegex.test(menuitems[i].innerText)) {
      menuitems[i].click(); // click "Archive chat"
      break;
    }
  }
  executionCount++;
  setTimeout(run, 200); // 200ms delay between archives so Facebook doesn't block us
}

run();


Importing Large .ics file into Gmail or Google Workspace Calendar

Importing a large calendar into Gmail or Google Workspace will fail when the exported .ics file is larger than 1MB. This is due to the Gmail interface’s 1MB limit on processing .ics files.

The solution is to split up the .ics file, which you can do manually or with the following Python script.

GitHub – druths/icssplitter: A script to split up big ics files
A script to split up big ics files. Contribute to druths/icssplitter development by creating an account on GitHub.
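If you’d rather not pull in the script, the core idea is simple enough to sketch: keep the calendar header, group lines into BEGIN:VEVENT/END:VEVENT blocks, and greedily pack events into chunks that stay under the size limit. Here’s a minimal, hypothetical sketch (the `split_ics` name and the ~900KB default are my own, not from icssplitter):

```python
def split_ics(text, max_bytes=900_000):
    """Split one large .ics calendar into several calendars under max_bytes.

    Hypothetical sketch: assumes each event is a BEGIN:VEVENT/END:VEVENT
    block and that the file contains at least one event.
    """
    lines = text.splitlines(keepends=True)

    # Everything before the first event (BEGIN:VCALENDAR, VERSION, etc.)
    # is repeated as the header of every chunk.
    first = next(i for i, ln in enumerate(lines) if ln.startswith("BEGIN:VEVENT"))
    header = "".join(lines[:first])
    footer = "END:VCALENDAR\n"

    # Group lines into whole BEGIN:VEVENT .. END:VEVENT blocks.
    events, current = [], []
    for ln in lines[first:]:
        if ln.startswith("BEGIN:VEVENT"):
            current = [ln]
        elif ln.startswith("END:VEVENT"):
            current.append(ln)
            events.append("".join(current))
            current = []
        elif current:
            current.append(ln)

    # Greedily pack events into chunks that stay under max_bytes.
    chunks, body = [], ""
    for event in events:
        if body and len(header) + len(body) + len(event) + len(footer) > max_bytes:
            chunks.append(header + body + footer)
            body = ""
        body += event
    if body:
        chunks.append(header + body + footer)
    return chunks
```

Write each chunk out to its own file (part1.ics, part2.ics, …) and import them into Google Calendar one at a time.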

Using cloud-init and Netplan with IPs on a Different Network and Gateway

If you’ve ever had to utilize a hosting provider that offers the option to buy extra IPs or failover IP addresses, you may have observed instances where these IPs shared the same gateway as your original IPs, rather than being part of the additional IP network.

Here are some of the providers I’m aware of that require this.

  • OVH
  • SoYouStart

The problem arises when you use cloud-init to deploy your VMs on Ubuntu, which uses netplan: unfortunately, there isn’t a way to configure netplan through cloud-init to use a gateway that isn’t on the same network as the IP address.
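For context, the configuration these providers need is a default route marked on-link, which tells the kernel the gateway is reachable directly on the interface even though it sits outside the address’s subnet. A hand-written netplan illustration (the addresses below are documentation placeholders, not from my setup):

```yaml
# Illustrative only: 203.0.113.10 and 198.51.100.1 are placeholder addresses.
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 203.0.113.10/32        # the failover/additional IP
      routes:
        - to: default
          via: 198.51.100.1      # gateway outside the IP's subnet
          on-link: true          # treat the gateway as directly reachable
```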

I’m using Proxmox, and although you can create a custom netplan configuration and deploy it as a snippet via cloud-init, this isn’t ideal.

Canonical looks to have fixed the bug in January 2023: https://github.com/canonical/cloud-init/pull/1931

However, that fix most likely applies to the newer Ubuntu LTS. I’ve tested within Ubuntu 20.04, and the appropriate config is in place. Here’s the generated /etc/netplan/50-cloud-init.yaml:

root@srv01:~# cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            match:
                macaddress: 02:00:00:79:e4:73
            nameservers:
                search:
                    - domain.com
            routes:
                -   on-link: true
                    to: default
            set-name: eth0
        eth1:
            dhcp4: true
            match:
                macaddress: 8a:ca:d3:4d:c9:28
            set-name: eth1
    BUG: No routing in VM with cloud init (ubuntu 18.x – 19.4) | Proxmox Support Forum
    It´s possible a bug in the network setting from proxmox to VMs with cloud-init and ubuntu. I have see many forum entries about the same problemas! The big…


    Using Monit Environment Variables with exec

    If you read the Monit documentation, it tells you exactly how to use Monit environment variables when using exec.

    No environment variables are used by Monit. However, when Monit executes a start/stop/restart program or an exec action, it will set several environment variables which can be utilised by the executable to get information about the event, which triggered the action.


    I can be smart, but sometimes I can be daft. You don’t want to use the variables within your Monit configuration; instead, you want to use them in your exec script.

    Here’s a great example of how to use $MONIT_EVENT. First, set up a Monit check:

    check system $HOST-steal
        if cpu (steal) > 0.1% for 1 cycles
            then exec "script.sh"
            AND repeat every 10 cycles

    Now here’s script.sh, which will use $MONIT_EVENT:

    #!/bin/bash
    echo "Monit Event: $MONIT_EVENT" | mail -s "$MONIT_EVENT" [email protected]

    I was in a rush and felt I had to post this to help others who might overlook this.

    Large Mail Folder and imapsync Error “NO Server Unavailable. 15”

    I was having issues migrating the “Sent Items” folder on a hosted Exchange 2013 account to Microsoft 365. The hosted Exchange 2013 server was returning a “NO Server Unavailable. 15” error when trying to select the “Sent Items” folder, which had 33,000 messages.

    Digging further, I couldn’t find anything until I stumbled upon this thread on the Microsoft forums.


    I’ve come across this issue twice with two different Exchange 2013 farms while setting up IMAP to use imapsync to migrate mail. The issue only happened when accessing one folder with lots of mail messages. A simple test is to use OpenSSL to verify the issue, like:

    openssl s_client -quiet -crlf -connect mail.domain.com:993
    A01 login domain/user password
    A02 LIST "" *
    A03 SELECT "problem folder"

    IMAP will return: A03 NO Server Unavailable. 15

    After changing lots of IMAP settings, the resolution was to enable IMAP protocol logging. It was previously disabled (by default), and this issue would happen. We disabled it again and the problem returned for the same mailbox. Re-enabling logging fixed it, et voilà.

    Set-ImapSettings -Server <server-name> -ProtocolLogEnabled $true

    Hope this helps someone!

    Getting Local Time Based on Timezone in Airtable

    If you’re using Airtable as a CRM and working with clients in different timezones, you might want to know their local time before actioning something, perhaps depending on whether they’re awake or asleep 🙂

    In your Airtable base, create a column called “Timezone” where you’ll put a supported timezone for the SET_TIMEZONE function. You can see a list of these timezones at the following link.


    You will then create a new “formula” column and use the following formula.

    IF( {Timezone} = BLANK() , "" , DATETIME_FORMAT(SET_TIMEZONE(NOW(), {Timezone} ), 'M/D/Y h:mm A'))

    The formula checks whether the Timezone field is blank; if it isn’t, it takes the current time from NOW(), shifts it to the timezone in the {Timezone} column with SET_TIMEZONE, and then formats it with DATETIME_FORMAT.

    You should then see the following in Airtable.

    Synology Redirect Nginx HTTP to HTTPS + Allow Let’s Encrypt

    You can follow this article pretty much all the way.


    However, it will fail if you use Let’s Encrypt to generate an SSL certificate, so you need to add the following location block above the redirect line. Here’s how it should look.

    server {
        listen 80 default_server{{#reuseport}} reuseport{{/reuseport}};
        listen [::]:80 default_server{{#reuseport}} reuseport{{/reuseport}};
        server_name _;
        gzip on;

        location /.well-known/acme-challenge/ {
            # put your configuration here, if needed
        }

        return 301 https://$host$request_uri;
    }

    Of course, after you make this change, you will need to restart Nginx:

    synoservicecfg --restart nginx

    You can add as many locations as you like; once they’re matched, the request will not continue to the redirect at the end of the server {} container.

    This was highlighted in the following Stack Overflow post.


    WHMCS Lightbox Loading Image in Footer (Cloudflare Issue)

    You might have seen a loading image in the footer of your WHMCS admin page. If you inspect the page, you’ll see it’s got some tags for the lightbox.

    The issue is related to Cloudflare Rocket Loader, you can simply create a page rule to disable Rocket Loader on the admin pages or disable Rocket Loader altogether.

    Source: https://whmcs.community/topic/309599-loading-spinner-admin-area/

    Disable Rocket Loader with Cloudflare Page Rule

    If you wish to disable Rocket Loader for a specific URL then you can use Cloudflare Page Rules using the following configuration.

    CyberPower UPS and Management Card RCCARD100 Review

    After purchasing the CyberPower CP1500PFCLCD UPS, I opted to purchase the RCCARD100 so that I could manage the UPS on the network. Unfortunately, the card did not work in the CP1500PFCLCD UPS. There were no lights at all while inserted, and no lights when an ethernet cable was plugged in from a switch.

    After digging further online, I didn’t find much about troubleshooting. But I did see lots of people talking about how this management card was cloud-only and required a subscription. I didn’t waste any time and returned it.

    I’ll keep the UPS for now; the next UPS will be an Eaton or APC with a real management card. I know APC has some models with cloud-only management cards too, so watch out.