Experience with Thunderbolt 3 / USB-C Docking Stations for macOS

Current Setup

I had a request from a client to purchase dual monitors for their MacBook Pro Late 2013 (MacBookPro11,1). I also wanted to sort out my own setup with my MacBook Pro 2016 (MacBookPro13,2). Currently I'm using the following: an Apple USB-C Digital AV Multiport Adapter and a generic ArkTek branded adapter. Both come with USB-C power passthrough, HDMI and a USB port, so I can power both monitors. I tried to daisy chain the adapters; power passthrough works, but the second display doesn't. I don't know if that's even possible, perhaps it would work with two Apple adapters. Anyways, this is great, but ultimately I'd like one cable to rule them all.

Using a Single Cable Docking Station, or So We Thought

Enter Wavlink. I found their product simply by searching Amazon, and it seemed to be well reviewed. It's a pretty standard dock, with a decent spread of front and rear ports. The connection to the computer is via a USB-C port, however the box includes two cables: one USB-C to USB-C and one USB-C to USB Type A. So you can pretty much use it on anything, however you need to make sure you can install the drivers.

Drivers? Why do I need Drivers?

So this is something I forgot completely when ordering. The Wavlink has a DisplayLink chipset inside, which is used to push the external monitors, so in fact you're not using your on-board graphics card to drive the displays. Although this model doesn't support charging, there is a charging model available; it's just not listed on Amazon. But there's one big issue I have with both of the above docking stations: they both rely on that DisplayLink chipset to power the monitors plugged into the dock. http://www.displaylink.com/integrated-chipsets/dl-1x5

Adding OVH Monitoring IPs to Windows 2012 Server Firewall

If you log in to the OVH control panel and notice that the monitoring is showing Red instead of Green, this is due to their monitoring servers not being able to connect to your VPS. You simply need to add the monitoring IPs they provide in the control panel to your Windows 2012 VPS firewall.

The following command will add a new rule called "OVH Monitoring" and allow the remote IPs specified in the OVH Control Panel.

netsh advfirewall firewall add rule name="OVH Monitoring" dir=in action=allow remoteip="92.222.184.0/24,92.222.185.0/24,92.222.186.0/24,167.114.37.0/24,192.99.166.111/32"
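
To confirm the rule took, netsh can read it back (this just queries the rule added above):

netsh advfirewall firewall show rule name="OVH Monitoring"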

 

OVH Public Cloud CentOS 7 Change Hostname

If you’ve tried to change the hostname on your OVH public cloud instance running CentOS 7, you may have had issues with it persisting after reboot.

Took me way too long to find this solution, but luckily someone had already spent way too much time figuring it out.

This GitHub Gist explains it all https://gist.github.com/zmjwong/77ee37deb1749c2582eb

Basically you need to edit /etc/cloud/cloud.cfg and add “preserve_hostname: true” and then set your hostname using hostnamectl.
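
The /etc/cloud/cloud.cfg addition is just this one line at the top level of the file:

preserve_hostname: true

Then set the hostname with hostnamectl: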

hostnamectl --transient set-hostname your.new.hostname
hostnamectl --static set-hostname your.new.hostname

Hope that helps, and thanks zmjwong!

CrashPlan Update 4.8 and Linux Headless Issues

An update to CrashPlan was rolled out at version 4.8, but no download files were available on their site for Windows. So if you've been trying to manage your Linux headless install and it's failing, this might be why.

Here are the release notes:

https://support.code42.com/Release_Notes/Code42_Platform/Code42_Public_Cloud_Version_5.4

I don't know how to manually kick off updates to CrashPlan on Windows, since this update was pushed out by the CrashPlan cloud.

Update 09/30/2016 @ 2:40PM

Was able to connect directly via port 4243, headless, without issues.

Update 09/30/2016 @ 10:00AM

Tried accessing a Linux server running 4.8.0 with a Windows client running 4.8.0, with no luck. This was what you would call headless, over an SSH tunnel. Will try direct.
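
For reference, the tunnel was the usual headless arrangement, something along these lines (host and user are placeholders; the local engine needs to be stopped first so port 4243 is free):

ssh -L 4243:localhost:4243 user@linux-server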

Update 09/30/2016 @ 9:15AM

Got a reply from CrashPlan on twitter.

 

RRDTool Error When Migrating Observium from 32bit to 64bit Server

We had a 32bit Ubuntu server that was getting pegged due to the limited memory it was able to use when Observium was kicking off its cron jobs. So I decided to move Observium to a 64bit Ubuntu server.

Unfortunately when trying to run the poller, the following error appeared.

ERROR: This RRD was created on another architecture

The solution was to go back to the old machine and dump the .rrd files to .xml using the rrdtool dump command. I found the solution in this article.

https://blog.remibergsma.com/2012/04/30/rrdtool-moving-data-between-32bit-and-64bit-architectures/

However, since the files were spread across subfolders, the code snippet provided wasn't going to do much. So I just did it with my good old friend xargs, cause I'm lame like that. I ran the following on the 32bit Ubuntu server (note the redirect has to happen inside a sub-shell so each dump lands in its own .xml file).

find . -name '*.rrd' | sed 's/\.rrd$//' | xargs --verbose -I{} sh -c 'rrdtool dump "{}.rrd" > "{}.xml"'

Then I used rsync to copy all the data over to the new 64bit Ubuntu server.
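
A minimal sketch of the copy, assuming Observium keeps its RRDs under /opt/observium/rrd on both hosts (adjust for your install):

rsync -avz /opt/observium/rrd/ user@new-server:/opt/observium/rrd/

And then I ran the following on the new server to restore the .rrd files.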

find . -name '*.xml' | sed 's/\.xml$//' | xargs --verbose -I{} rrdtool restore -f {}.xml {}.rrd

And Observium was back to normal! Yea!

Installing Debian on BeagleBone Black MicroSD Card

The first step is to grab the BeagleBone Debian image that you can drop on your SD card: http://beagleboard.org/latest-images Now you will need to flash this image to your MicroSD card. I'm using Mac OS X, so here's a flyby.

sudo su -                        # become root
diskutil list                    # identify the SD card's disk number (the N below)
diskutil unmountDisk /dev/diskN  # unmount it before writing
dd if=myImage.dd of=/dev/diskN   # write the image; double-check N first!
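
A quick aside: on OS X, dd against the raw device node is usually much faster, so this variant (same disk number N) is worth trying:

dd if=myImage.dd of=/dev/rdiskN bs=1m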

Tips and Tricks for Building Ubuntu Packages and Compiling

If you’re about to build packages

  • apt-get install build-essential

If you can't find debuild

  • apt-get install devscripts

If you receive the following error

dh: unable to load addon quilt: Can't locate Debian/Debhelper/Sequence/quilt.pm in @INC (you may need to install the Debian::Debhelper::Sequence::quilt module) (@INC contains: /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at (eval 13) line 2.
BEGIN failed--compilation aborted at (eval 13) line 2.
  • apt-get install quilt

If you receive the following error

dh: unable to load addon autoreconf: Can't locate Debian/Debhelper/Sequence/autoreconf.pm in @INC (you may need to install the Debian::Debhelper::Sequence::autoreconf module) (@INC contains: /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at (eval 15) line 2.
BEGIN failed--compilation aborted at (eval 15) line 2.
  • apt-get install dh-autoreconf

Building Ubuntu Package

  • apt-get install build-essential fakeroot dpkg-dev
  • mkdir build
  • cd build
  • sudo apt-get source foo
  • sudo apt-get build-dep foo
  • debchange
  • debuild -S -sd (source package) or debuild -us -uc -i -I (unsigned binary build)
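
Putting those steps together, here's roughly what a full run looks like, using htop as a stand-in package name (any package in the archive works the same way):

sudo apt-get install build-essential fakeroot dpkg-dev devscripts
mkdir build && cd build
apt-get source htop
sudo apt-get build-dep htop
cd htop-*/
debchange                # add a changelog entry and bump the version
debuild -us -uc -i -I    # unsigned binary build
ls ../*.deb              # the built packages land one directory up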

Creating ISO Partition on Local Disk for XenServer/XenCenter 6.5

I just started playing with XenServer/XenCenter 6.5 and found that a couple of templates required ISO images, specifically Ubuntu 14.04 LTS Trusty Tahr due to a bug (you can google it). Other images like Ubuntu 12.04 Precise Pangolin didn't require an ISO and just needed a URL to install. Funny enough, that failed for me too; I was going to install 12.04 and then upgrade to 14.04, but no luck.

Google wasn't really all that helpful at first; it took me a while to find a solution. There are a lot of old articles that reference LVM, which I believe was used up until XenServer 6.2?

I installed XenServer 6.5 from the ISO installer, which creates two 4GB GPT partitions and leaves the rest of the space on your installation destination as free. The following blog post has more information about the installation partitions and how to keep them clean.

http://xenserver.org/discuss-virtualization/virtualization-blog/entry/xenserver-root-disk-maintenance.html

So I needed to create an additional GPT partition to store my ISOs on. I used a 128GB SSD for XenServer; using gdisk, I created a 50GB partition.

gdisk /dev/sda

I made sure to leave the type as 0700 and then wrote the changes to disk. If you don’t know how to use gdisk, google can help.

I then had to reboot to see the new partition, and then formatted it as ext3.

mkfs.ext3 /dev/sda3

Remember, /dev/sda2 is the other unmounted 4GB partition, so don't format it!

I then mounted the partition to /mnt/iso and told XenServer about it.

mount /dev/sda3 /mnt/iso

xe sr-create name-label ="ISO Repository" type=iso device-config:location=/mnt/iso device-config:legacy_mode=true content-type=iso

It showed as blank within XenCenter and I didn't know why, so I just renamed it. This is actually due to a space after "name-label", as per Sean in the comments. Here is the correct line!

xe sr-create name-label="ISO Repository" type=iso device-config:location=/mnt/iso device-config:legacy_mode=true content-type=iso
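
To confirm the SR registered cleanly, xe can list it back, filtering on the content type used above:

xe sr-list content-type=iso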

I also noticed it unmounted the partition, so I had to remount it. I then uploaded my ISO images using WinSCP and went to create my new VM, but my newly uploaded ISOs didn't show up. I had to refresh the ISO storage so it could see the newly uploaded files: just click on the new "ISO" SR, click the "Storage" tab and press "Rescan", which then showed the ISOs correctly.
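
To keep the partition mounted across reboots, an /etc/fstab entry along these lines should do it (assuming /dev/sda3 and /mnt/iso as above):

/dev/sda3    /mnt/iso    ext3    defaults    0 0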

Bammmm. Done. Any mistakes or inaccuracies, please let me know.

CrashPlan Error "Could not initialize class com.code42.jna.inotify.InotifyManager"

I had a host that wasn’t backing up at all and found the following error message within the CrashPlan engine_error.log located in /usr/local/crashplan/log

Exception in thread "W87903837_ScanWrkr" java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager
at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.<init>(JNAInotifyFileWatcherDriver.java:21)
at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(BackupSetsManager.java:393)
at com.code42.backup.path.BackupSetsManager.startScheduledFileQueue(BackupSetsManager.java:331)
at com.code42.backup.path.BackupSetsManager.access$1600(BackupSetsManager.java:66)
at com.code42.backup.path.BackupSetsManager$ScanWorker.delay(BackupSetsManager.java:1073)
at com.code42.utils.AWorker.run(AWorker.java:158)
at java.lang.Thread.run(Thread.java:744)

From what I could tell it was related to a CentOS 6.x upgrade that may have set the noexec flag on /tmp, which wasn't set previously.
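
A quick way to check is to look at the mount options for /tmp (noexec will show up in the flags if it's set; if nothing comes back, /tmp isn't a separate mount and inherits the root filesystem's options):

mount | grep -w /tmp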

The error seems to be related to writing files to the /tmp directory; the following two websites, which I found with a simple Google search, describe this problem.

http://feeding.cloud.geek.nz/posts/crashplan-and-non-executable-tmp-directories/

https://randomwindowstips.wordpress.com/2013/02/25/crashplan-pro-for-linux-stuck-at-waiting-for-backup-or-connecting-to-backup-destination/

You have to update the CrashPlan Java options so it stores its temporary files in a directory that isn't mounted as "noexec" by your system.

Open up /usr/local/crashplan/bin/run.conf and add the following to the end of SRV_JAVA_OPTS:

-Djava.io.tmpdir=/var/crashplan
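
Make sure the target directory actually exists, and restart the engine afterwards (the engine script path below is the stock Linux install location; adjust if yours differs):

mkdir -p /var/crashplan
/usr/local/crashplan/bin/CrashPlanEngine restart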

CloudFlare Cache Purge Plugin Logs to Posts

You might have seen the following Posts in your WordPress blog after installing the CloudFlare Cache Plugin.

SUCCESS : automatic purge url cache for wordpress plugin

This is actually something the plugin is supposed to do, but there is no option to turn it off. And it looks like the plugin hasn't been updated in months.

https://wordpress.org/support/topic/option-to-turn-off-logging?replies=3#post-6538553

You can disable this by commenting out the following line, as per the support article above:

I had to comment out line 86 (//wp_insert_post( $log_entry );) in cloudflare_cachepurge.php.