So I bought a 3-pack of WeMo Smart Plugs in the hopes of yelling "Alexa, turn the bedroom light off" before bedtime, instead of buying a Clapper, which I've never actually seen in real life.
The Clapper was probably the better idea, seeing as the WeMo Smart Plugs are notorious for disconnecting from WiFi or having general issues connecting with HomeKit.
There are tons of community posts about this issue on the WeMo community forums.
Unfortunately you can’t view the posts unless you register an account. Good luck, as their registration page is also broken.
I had bought the WeMo Smart Plugs based on The Wirecutter's Best Smart Switch review, in which the WeMo is ranked number one.
If you read the Disqus comments, you’ll see that’s not really the case. Most of the comments talk about how the WeMo is unreliable.
Oh, and you'll notice that sometimes, even after the WeMo is registered as a device in the mobile app, its unsecured WiFi network is still active and you can still connect to it.
Anyone interested in the wemo mini please be aware that there is currently a security issue with the latest iteration of hardware. When using the latest version the setup network stays active after the device is setup. It may go away at first but will turn back on after the device loses power or loses connection to your router.
Belkin has told me they had a similar issue with their previous hardware last year and were able to fix it with a firmware update, but there is currently no fix for the current version.
Early replies from their technical support claimed that the setup network is “inactive” after setup, and that although it may broadcast the network no one can perform any actions with the device over this network after setup.
This is not true. The setup network can be connected to and the device can be operated by anyone connected to the setup network.
If you currently have the device, there is, as of Feb 22nd 2019, no solution but to unplug it.
Please see details on the issue in their support forum.
For now I’m going to do some research but would love to know what others are using and having success with!
If you have the Booster for WooCommerce plugin enabled and have used their order items table shortcode, specifically "[wcj_order_items_table]", you might see item meta that should be hidden. I've created a GitHub Gist with a fix that you can place into wp-content/plugins.
If you log in to the OVH control panel and notice that monitoring is showing red instead of green, it's because their monitoring servers can't connect to your VPS. You simply need to add the monitoring IPs they provide in the control panel to your Windows 2012 VPS firewall.
The following command adds a new rule called "OVH Monitoring" allowing the remote IPs specified in the OVH Control Panel.
netsh advfirewall firewall add rule name="OVH Monitoring" dir=in action=allow remoteip="220.127.116.11/24,18.104.22.168/24,22.214.171.124/24,126.96.36.199/24,188.8.131.52/32"
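To confirm the rule actually took effect, you can list it back out with the same built-in netsh tooling:

```shell
# Display the rule we just added, including its remote IP scope
netsh advfirewall firewall show rule name="OVH Monitoring"
```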
If you’ve tried to change the hostname on your OVH public cloud instance running CentOS 7, you may have had issues with it persisting after reboot.
It took me way too long to find this solution, but thankfully someone had already spent way too much time figuring it out.
This GitHub Gist explains it all https://gist.github.com/zmjwong/77ee37deb1749c2582eb
Basically you need to edit /etc/cloud/cloud.cfg, add "preserve_hostname: true", and then set your hostname using hostnamectl.
hostnamectl --transient set-hostname your.new.hostname
hostnamectl --static set-hostname your.new.hostname
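The cloud.cfg edit can be scripted too; a minimal sketch, assuming the default /etc/cloud/cloud.cfg location:

```shell
# Add preserve_hostname only if it isn't already set (default cloud-init config path)
grep -q '^preserve_hostname' /etc/cloud/cloud.cfg \
  || echo 'preserve_hostname: true' >> /etc/cloud/cloud.cfg
# Then set the hostname; it should now survive a reboot
hostnamectl set-hostname your.new.hostname
```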
Hope that helps, and thanks to zmjwong!
An update to CrashPlan was rolled out as version 4.8, but no download files were available on their site for Windows. So if you've been trying to manage your Linux headless install and it's failing, this might be why.
Here’s the release notes.
I don't know how to manually kick off the CrashPlan update on Windows, since this update is being pushed out from the CrashPlan cloud.
Update 09/30/2016 @ 2:40PM
Was able to connect directly to the headless install via port 4243 without issues.
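For anyone following along, "direct connect" here means pointing the desktop client straight at the server's service port. A hedged sketch of the usual headless recipe (the ui.properties keys and paths come from common community guides, not verified against 4.8.0):

```shell
# In the client's conf/ui.properties (path varies by OS/install), set:
#   serviceHost=your.backup.server
#   servicePort=4243
# Or leave ui.properties alone and tunnel the service port instead:
ssh -L 4243:localhost:4243 user@your.backup.server
```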
Update 09/30/2016 @ 10:00AM
Tried accessing a Linux server running 4.8.0 from a Windows client running 4.8.0 over an SSH tunnel (essentially what you would call headless), with no luck. Will try a direct connection.
Update 09/30/2016 @ 9:15AM
Got a reply from CrashPlan on twitter.
- Linux https://download1.code42.com/installs/linux/install/CrashPlan/CrashPlan_4.8.0_Linux.tgz
- Mac https://download1.code42.com/installs/mac/install/CrashPlan/CrashPlan_4.8.0_Mac.dmg
- Win64 https://download1.code42.com/installs/win/install/CrashPlan/jre/CrashPlan_4.8.0_Win64.msi
- Win32 https://download1.code42.com/installs/win/install/CrashPlan/jre/CrashPlan_4.8.0_Win.msi
— CrashPlan Support (@CrashPlanHelp) September 30, 2016
We had a 32-bit Ubuntu server that was getting pegged, due to the limited memory it was able to use, whenever Observium kicked off its cron jobs. So I decided to move Observium to a 64-bit Ubuntu server.
Unfortunately when trying to run the poller, the following error appeared.
ERROR: This RRD was created on another architecture
The solution was to go back to the old machine and dump the .rrd files to .xml using the rrdtool dump command. I found the solution in this article.
However, since the files were located in nested folders, the code snippet provided wasn't going to do much. So I just did it with my good old friend xargs, because I'm lame like that. I ran the following on the 32-bit Ubuntu server.
find . -name '*.rrd' | sed 's/\.rrd$//' | xargs --verbose -I{} sh -c 'rrdtool dump "{}.rrd" > "{}.xml"'
Then I used rsync to copy all the data over to the new 64-bit Ubuntu server, and ran the following there.
find . -name '*.xml' | sed 's/\.xml$//' | xargs --verbose -I{} rrdtool restore -f "{}.xml" "{}.rrd"
And Observium was back to normal! Yea!
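One subtlety with this pattern: a redirect like `> ext.xml` tacked onto the end of the xargs line is performed once by the outer shell, not once per file, so per-file output needs the redirect inside a `sh -c`. A quick demonstration with dummy files (no rrdtool needed):

```shell
# Create some dummy .rrd files in nested directories
mkdir -p /tmp/rrdtest/sub && cd /tmp/rrdtest
touch a.rrd sub/b.rrd
# The "> {}.xml" lives inside sh -c, so each file gets its own
# .xml output written next to it
find . -name '*.rrd' | sed 's/\.rrd$//' | xargs -I{} sh -c 'echo dump > "{}.xml"'
ls ./a.xml ./sub/b.xml
```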
If you’re about to build packages
- apt-get install build-essential
If you can't find debuild
- apt-get install devscripts
You receive the following error
dh: unable to load addon quilt: Can't locate Debian/Debhelper/Sequence/quilt.pm in @INC (you may need to install the Debian::Debhelper::Sequence::quilt module) (@INC contains: /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at (eval 13) line 2. BEGIN failed--compilation aborted at (eval 13) line 2.
- apt-get install quilt
You receive the following error
dh: unable to load addon autoreconf: Can't locate Debian/Debhelper/Sequence/autoreconf.pm in @INC (you may need to install the Debian::Debhelper::Sequence::autoreconf module) (@INC contains: /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at (eval 15) line 2. BEGIN failed--compilation aborted at (eval 15) line 2.
- apt-get install dh-autoreconf
Building Ubuntu Package
- apt-get install build-essential fakeroot dpkg-dev
- mkdir build
- cd build
- apt-get source foo (no sudo needed; sudo leaves the unpacked source owned by root)
- sudo apt-get build-dep foo
- debuild -S -sd (source-only build) or debuild -us -uc -i -I (unsigned binary build)
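Putting the notes above together, a full rebuild of a package might look like this (using `hello` as a stand-in for `foo`):

```shell
# Sketch of a full source rebuild; "hello" is just an example package
sudo apt-get install build-essential fakeroot dpkg-dev devscripts
mkdir build && cd build
apt-get source hello            # fetch and unpack the source (no sudo needed)
sudo apt-get build-dep hello    # install its build dependencies
cd hello-*/
debuild -us -uc -i -I           # build unsigned binary packages
```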
I just started playing with XenServer/XenCenter 6.5 and found that a couple of the templates required ISO images, specifically Ubuntu 14.04 LTS Trusty Tahr, due to a bug (you can google it). Other images like Ubuntu 12.04 Precise Pangolin didn't require an ISO and just needed a URL to install from. Funnily enough, that failed for me too: I was going to install 12.04 and then upgrade to 14.04, but no luck.
Google wasn't really all that helpful at first; it took me a while to find a solution. There are a lot of old articles that reference LVM, which I believe was used up until XenServer 6.2.
I installed XenServer 6.5 from the ISO installer, which creates two 4GB GPT partitions and leaves the rest of the space on your installation destination free. The following blog post has more information about the installation partitions and how to keep them clean.
So I needed to create an additional GPT partition to store my ISOs on. I used a 128GB SSD for XenServer, so I fired up gdisk and created a 50GB partition.
I made sure to leave the type as 0700 and then wrote the changes to disk. If you don’t know how to use gdisk, google can help.
I then had to reboot to see the new partition, and then formatted it as ext3.
Remember, /dev/sda2 is the second, unmounted 4GB installer partition, so don't format it!
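For reference, the partition steps above roughly translate to the following (the device names are from my setup; double-check yours before writing anything):

```shell
# Interactive: in gdisk, use n (new partition), +50G, type code 0700, then w (write)
gdisk /dev/sda
# Re-read the partition table (or just reboot, as I did)
partprobe /dev/sda
# Format ONLY the new partition -- /dev/sda2 is the unmounted 4GB one, leave it alone
mkfs.ext3 /dev/sda3
```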
I then mounted the partition to /mnt/iso and told XenServer about it.
mount /dev/sda3 /mnt/iso
xe sr-create name-label ="ISO Repository" type=iso device-config:location=/mnt/iso device-config:legacy_mode=true content-type=iso
It showed up as blank within XenCenter, and at first I didn't know why, so I just renamed it. As Sean pointed out in the comments, this was actually due to a space after "name-label". Here is the correct line!
xe sr-create name-label="ISO Repository" type=iso device-config:location=/mnt/iso device-config:legacy_mode=true content-type=iso
I also noticed the partition had been unmounted, so I had to remount it. I then uploaded my ISO images using WinSCP and went to create my new VM, but my newly uploaded ISOs didn't show up. I had to refresh the ISO storage so it could see the newly uploaded files: click on the new "ISO" SR, click the "Storage" tab, and press "Rescan", which then showed the ISOs correctly.
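To avoid remounting by hand after every reboot, an /etc/fstab entry should do it (a sketch; adjust the device name to match your layout):

```shell
# Persist the ISO mount across reboots
echo '/dev/sda3  /mnt/iso  ext3  defaults  0  0' >> /etc/fstab
mount -a   # mounts everything in fstab; errors here mean the entry is wrong
```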
Bammmm. Done! If you spot any mistakes or incorrectness, please let me know.
I had a host that wasn’t backing up at all and found the following error message within the CrashPlan engine_error.log located in /usr/local/crashplan/log
Exception in thread "W87903837_ScanWrkr" java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager
    at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.<init>(JNAInotifyFileWatcherDriver.java:21)
    at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(BackupSetsManager.java:393)
    at com.code42.backup.path.BackupSetsManager.startScheduledFileQueue(BackupSetsManager.java:331)
    at com.code42.backup.path.BackupSetsManager.access$1600(BackupSetsManager.java:66)
    at com.code42.backup.path.BackupSetsManager$ScanWorker.delay(BackupSetsManager.java:1073)
    at com.code42.utils.AWorker.run(AWorker.java:158)
    at java.lang.Thread.run(Thread.java:744)
From what I could tell, it was related to a CentOS 6.x upgrade that may have set the noexec flag on /tmp, which wasn't set previously.
The error seems to be related to writing files to the /tmp directory; the following two websites, found via a simple Google search, confirmed this problem.
You have to update the CrashPlan Java options so it stores its temporary files in a directory that isn't mounted as "noexec" by your system.
Open up /usr/local/crashplan/bin/run.conf and add the temp-directory option to the end of SRV_JAVA_OPTS.
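The usual fix for a noexec /tmp is to point the JVM at a different temp directory via the standard `java.io.tmpdir` property; a hedged sketch (the tmp path here is my assumption, not from the original setup):

```shell
# Create a temp dir on a filesystem without noexec (path is an assumption)
mkdir -p /usr/local/crashplan/tmp
# In /usr/local/crashplan/bin/run.conf, append to SRV_JAVA_OPTS:
#   -Djava.io.tmpdir=/usr/local/crashplan/tmp
# Then restart the engine so it picks up the new option
/usr/local/crashplan/bin/CrashPlanEngine restart
```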
You might have seen the following posts in your WordPress blog after installing the CloudFlare cache plugin.
SUCCESS : automatic purge url cache for wordpress plugin
This is actually something the plugin is supposed to do, but there is no option to turn it off, and it looks like the plugin hasn't been updated in months.
You can disable this by commenting out the following line as per the support article above
I had to comment out line 86 (//wp_insert_post( $log_entry );) in cloudflare_cachepurge.php.
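If you'd rather script the change than edit the file by hand, something like this should work (the plugin directory name is my guess at the install location; verify both the path and the line number against your plugin version first):

```shell
# Comment out the wp_insert_post() logging call; line 86 matches the
# plugin version described above -- verify before running
sed -i '86s|^|//|' wp-content/plugins/cloudflare-cache-purge/cloudflare_cachepurge.php
```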