Using Monit Environment Variables with exec

If you read the Monit documentation, it tells you exactly how to use Monit environment variables when using exec.

No environment variables are used by Monit. However, when Monit executes a start/stop/restart program or an exec action, it will set several environment variables which can be utilised by the executable to get information about the event, which triggered the action.

https://mmonit.com/monit/documentation/monit.html#ENVIRONMENT

I can be smart, but sometimes I can be daft. You don’t use these variables within your Monit configuration; instead, you use them inside the script that exec runs.

Here’s a great example of how to use $MONIT_EVENT. First, set up a Monit check:

check system $HOST-steal
    if cpu (steal) > 0.1% for 1 cycles
        then exec "script.sh"
        AND repeat every 10 cycles

Now here’s script.sh, which uses $MONIT_EVENT:

#!/bin/bash
echo "Monit Event: $MONIT_EVENT" | mail -s "$MONIT_EVENT" [email protected]
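
Monit sets more than just $MONIT_EVENT. As a rough sketch (using other variables listed in the documentation linked above; the recipient address is a placeholder), the script could include a few of them in the message body:

#!/bin/bash
# Sketch only: MONIT_SERVICE, MONIT_EVENT, MONIT_DESCRIPTION, MONIT_HOST and
# MONIT_DATE are set by Monit when it runs the exec action; adjust the
# recipient to suit.
BODY="Service: $MONIT_SERVICE
Event: $MONIT_EVENT
Description: $MONIT_DESCRIPTION
Host: $MONIT_HOST
Date: $MONIT_DATE"
echo "$BODY" | mail -s "Monit: $MONIT_EVENT on $MONIT_HOST" [email protected]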

I was in a rush and felt I had to post this to help others who might overlook this.

Large Mail Folder and imapsync Error “NO Server Unavailable. 15”

I was having issues migrating the “Sent Items” folder of a hosted Exchange 2013 account to Microsoft 365. The hosted Exchange 2013 server was returning a “NO Server Unavailable. 15” error when trying to select the “Sent Items” folder, which contained 33,000 messages.

Digging further, I couldn’t find anything until I stumbled upon this thread on the Microsoft forums.

https://social.technet.microsoft.com/Forums/azure/en-US/2508f50f-6b28-4961-8e6c-5425914d4caa/no-server-unavailable-15-on-exchange-2013?forum=exchangesvrclients

I’ve come across this issue twice with two different Exchange 2013 farms while setting up IMAP to use imapsync to migrate mail. The issue only happened when accessing one folder containing a large number of messages. A simple test is to use OpenSSL to verify the issue:

openssl s_client -quiet -crlf -connect mail.domain.com:993
A01 login domain/user password
A02 LIST "" *
A03 SELECT "problem folder"

IMAP will return: A03 NO Server Unavailable. 15

After changing lots of IMAP settings, the resolution was to enable IMAP protocol logging. It was previously disabled (the default), and the issue would happen. We disabled it again and the problem returned for the same mailbox. We re-enabled logging and, voilà, it worked.

Set-ImapSettings -Server <server-name> -ProtocolLogEnabled $true
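
With logging enabled and the folder selectable again, you can retry the migration for just that folder. A minimal imapsync sketch (hostnames, accounts, passwords and the destination server are placeholders; authentication requirements on the Microsoft 365 side depend on your tenant):

imapsync \
  --host1 mail.domain.com --user1 "domain/user" --password1 'secret1' --ssl1 \
  --host2 outlook.office365.com --user2 "[email protected]" --password2 'secret2' --ssl2 \
  --folder "Sent Items"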

Hope this helps someone!

Getting Local Time based on Timezone in AirTable

If you’re using Airtable as a CRM and working with clients in different timezones, you might want to know what their local time is before actioning something, perhaps while they’re awake or asleep 🙂

In your Airtable database, create a column called “Timezone” where you’ll put a supported timezone for the SET_TIMEZONE() function. You can see a list of supported timezones at the following link:

https://support.airtable.com/docs/supported-timezones-for-set-timezone

You will then create a new “formula” column and use the following formula.

IF( {Timezone} = BLANK() , "" , DATETIME_FORMAT(SET_TIMEZONE(NOW(), {Timezone} ), 'M/D/Y h:mm A'))

The above formula checks whether the Timezone field is blank; if it isn’t, it takes the current time from NOW(), converts it to the timezone in the {Timezone} column with SET_TIMEZONE(), and then formats the result with DATETIME_FORMAT().
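
If you’d rather show the unconverted time when no timezone has been set (Airtable formulas default to GMT) instead of leaving the cell empty, an untested variation using the same functions would be:

IF( {Timezone} = BLANK() , DATETIME_FORMAT(NOW(), 'M/D/Y h:mm A') , DATETIME_FORMAT(SET_TIMEZONE(NOW(), {Timezone}), 'M/D/Y h:mm A'))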

You should then see the client’s local time in the new column in Airtable.

Synology Redirect Nginx HTTP to https + Allow Letsencrypt

You can follow this article pretty much all the way.

https://techjogging.com/redirect-www-to-https-in-synology-nas-nginx.html

However, it will fail if you use Let’s Encrypt to generate an SSL certificate, so you simply need to add the following location block above the redirect line. Here’s how it should look.

server {
    listen 80 default_server{{#reuseport}} reuseport{{/reuseport}};
    listen [::]:80 default_server{{#reuseport}} reuseport{{/reuseport}};

    gzip on;
    
    location /.well-known/acme-challenge/ {
    # put your configuration here, if needed
    }

    server_name _;
    return 301 https://$host$request_uri;
}

Of course, after you make this change you will need to restart Nginx:

synoservicecfg --restart nginx

You can add as many locations as you like; once they’re matched, the request will not continue to the redirect at the end of the server {} container.

This was highlighted in the following Stack Overflow post.

https://serverfault.com/questions/874090/nginx-redirect-everything-except-letsencrypt-to-https

CyberPower UPS and Management Card RCCARD100 Review

After purchasing the CyberPower CP1500PFCLCD UPS, I opted to purchase the RCCARD100 so that I could manage the UPS on the network. Unfortunately, the card did not work in the CP1500PFCLCD UPS. There were no lights at all while inserted, and no lights when an ethernet cable was plugged in from a switch.

After digging further online, I didn’t find much about troubleshooting. But I did see lots of people talking about how this management card was cloud-only and required a subscription. I didn’t waste any time and returned it.

I’ll keep the UPS for now; the next UPS will be an Eaton or APC with a real management card. I know some APC models also use cloud-only management cards, so watch out.

MySQL, Percona, MariaDB Error: Out of sort memory, consider increasing server sort buffer size!

There is a bug in MySQL 8.0.18 and above (https://bugs.mysql.com/bug.php?id=103225); it was patched in 8.0.28 (https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html).

It looks as though Percona released version 8.0.28, which includes all of MySQL’s features and bug fixes. However, if you only do security updates, it’s possible you might have an older version of Percona based on your server’s deployment date. You can run mysql --version via SSH to confirm.

If you’re not on 8.0.28, then you can run apt-get update and then apt-get upgrade. However, this will upgrade all packages on the system. So instead, you might just want to update Percona by running apt-get install --only-upgrade percona-server-common.

If you’re worried about what apt-get upgrade will do, you can run it in safe mode and see what packages will be upgraded and their version; simply type apt-get -V -s upgrade.
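
Putting those commands together, here’s a rough sequence for a Debian/Ubuntu-based server (the package name is the one mentioned above; use whichever Percona packages your installation actually has):

apt-get update                                          # refresh package lists
apt-get -V -s upgrade                                   # dry run: show what would be upgraded and to which versions
apt-get install --only-upgrade percona-server-common    # upgrade only the Percona package
mysql --version                                         # confirm the installed version afterwards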

As for MariaDB, this was patched in 10.5.7 (https://jira.mariadb.org/browse/MDEV-24015); the same process applies as above.

Setting up Proxmox Email Alerts

Introduction

You may not have known, but Proxmox does send out emails every so often. I’m putting this up to mirror the information found at the following location.

https://crepaldi.us/2021/03/07/configuring-e-mail-alerts-on-your-proxmox/

1. Install the authentication library

apt-get install libsasl2-modules

2. Choose an SMTP Provider

You can use a Gmail account with an App Password; App Passwords are available once you enable 2FA. I use Postmark because it’s the best out there and I don’t mind paying.

3. Create a password file

nano /etc/postfix/sasl_passwd

4. Insert your login details

smtp.gmail.com [email protected]:yourpassword

5. Save the password file

6. Create a database from the password file

postmap hash:/etc/postfix/sasl_passwd

7. Protect the text password file

chmod 600 /etc/postfix/sasl_passwd

8. Edit the postfix configuration file

nano /etc/postfix/main.cf

9. Add/change the following (certificates can be found in /etc/ssl/certs/):

relayhost = smtp.gmail.com:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/Entrust_Root_Certification_Authority.pem
smtp_tls_session_cache_database = btree:/var/lib/postfix/smtp_tls_session_cache
smtp_tls_session_cache_timeout = 3600s
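
Before reloading, you can optionally run Postfix’s built-in sanity check to catch obvious mistakes in main.cf:

postfix check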

10. Reload the updated configuration

postfix reload

11. Testing

echo "test message" | mail -s "test subject" [email protected]
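
If the test message never arrives, the Postfix log is the first place to look. A quick sketch, assuming a default Debian/Proxmox logging setup (the exact location depends on whether rsyslog is installed):

journalctl --no-pager -n 100 | grep postfix   # recent Postfix entries from the systemd journal
tail -n 50 /var/log/mail.log                  # or the classic syslog file, if present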

SynoCommunity “Invalid location” error

When trying to add the SynoCommunity package source to your Synology’s package manager, you might get the “Invalid location” error.

You can find the cause of this issue on the SynoCommunity Github, under the following issue.

https://github.com/SynoCommunity/spksrc/issues/4897

The root cause, as explained by the creator of SynoCommunity:

So yes after some testing on my end I can confirm that the trust certificates on a not fully updated DSM 6 are too old (a certificate must have expired recently). If you want to continue to stay on an old versions (not recommended obviously) you can update the trust store manually by overriding the file with a more recent one (assuming you trust the curl developers), using SSH:

sudo mv /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt.bak && sudo curl -Lko /etc/ssl/certs/ca-certificates.crt https://curl.se/ca/cacert.pem

This will fix the issue, alternatively you can set the clock back. The best solution however is to update to a more recent DSM6 version.

So if you’re running DSM6, update to the latest available version or DSM7 if supported. Otherwise, the following command will resolve the issue.

sudo mv /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt.bak && sudo curl -Lko /etc/ssl/certs/ca-certificates.crt https://curl.se/ca/cacert.pem
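
After updating the trust store, you can quickly confirm that TLS validation succeeds against the package server (assuming the standard SynoCommunity package source URL):

curl -sI https://packages.synocommunity.com/ | head -n 1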

Synology Mail Server Fix “Enable User Home Service” Mirrored Content

If you’ve ever installed Synology Mail Server, you might have had the “Enable User Home Service” error pop up. Even after enabling this setting and restarting, the error still comes up.

I’m mirroring this page just in case: https://community.synology.com/enu/forum/1/post/144970

How to fix “Enable User Home Service” or “The operation failed” if you try to disable User Home Service. Applies to Moments and other packages that rely on User Home Service

Posted by @pbarney, Jul 08, 2021

Apparently, when you enable the user home service on the Synology NAS, it incorrectly places the homes directory in /var/services/homes even though it’s supposed to be in /volume1/homes (with a symlink to it in /var/services/homes).

This creates a number of problems, including potential loss of user data on upgrades, but it also causes many other packages that depend on the user home service to fail.

The solution to this apparently very common problem can be found on Gabriel Viso Carrera’s web page at https://gvisoc.com/tech/linux/2021/02/06/Fixing-User-Homes-Error-Synology-NAS.html

The fix on that page solved the problem completely.

Here is the basic fix:

  1. SSH into your NAS (you can set up SSH at Control Panel → Terminal & SNMP → Terminal tab → Enable SSH service)
  2. Go to the services folder: cd /var/services
  3. Create an archive of existing user files (if necessary): tar cfz /volume1/homes/homes-backup.tgz homes
  4. Remove the homes folder: sudo rm -rf homes
  5. Create a symbolic link to /volume1/homes: sudo ln -s /volume1/homes homes

At this point, you can untar your user files archive if necessary into /volume1/homes.
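
A minimal sketch of that restore step, assuming the backup archive created in step 3 above (the archive stores paths relative to /var/services, i.e. starting with homes/):

cd /volume1
sudo tar xfz /volume1/homes/homes-backup.tgz   # extracts back into /volume1/homes/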

After this, Moments will work. This also applies to other packages that rely on the User Home Service.

Note: the page with the solution has also been saved to the Internet Archive at https://web.archive.org/web/20210206053959/https://gvisoc.com/tech/linux/2021/02/06/Fixing-User-Homes-Error-Synology-NAS.html in case the original website ever disappears.