Tag Archives: Perl

A long time ago I became frustrated with having to update my WordPress plugins manually, so I created a Perl script and a blog post (https://blog.vpetkov.net/2011/08/03/script-to-upgrade-plugins-on-wordpress-to-the-latest-version-fully-automatically/) that explained how to automate this. The idea was quite simple: feed it a plugin name, have the script check the WordPress plugins page for the latest versioned download, grab it, and extract it over the blog’s plugins directory, thereby updating the plugin.

The script was simple and it worked very well. It made dealing with plugins many times easier. However, there was one big downside, as some users pointed out: it did not actually check whether a plugin needed to be updated. It blindly replaced the current plugin with the latest version. This meant that there was no way to “efficiently” automate it. If you cron-ed it directly, it would simply pull and update all your plugins at whatever period you specified. For the longest time this really irritated me, but I didn’t have time to dig through WordPress to understand how the engine checked and signaled updates for local plugins. One particular user (Joel) forked a copy and made many improvements to deal with this specific issue.

As time went on, I decided to look at this problem again. A couple of years ago I solved it in a really elegant way, but I didn’t have time to update the blog post. A few days ago, after looking at the blog statistics, I realized that the WordPress article was one of the top 10 most popular ones. So, with that said, here is:

A new simple and elegant solution

The idea is to use the WordPress CLI to “query” the local plugins database for plugin names, version numbers, and “activated” status, and then compare the “local” plugin version with the “remote” plugin version. If a plugin is active and in need of an update, fall back to my original Perl script to update it. Aha! And now we have something that can be cron-ed 🙂

To get started, first grab the WP CLI utility. We are going to rename it, move it to an accessible place, and take care of permissions so that we can use it:
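The standard wp-cli install steps look like this (the phar URL is the official one, and renaming the phar to “wp” is the usual convention):

    curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
    chmod +x wp-cli.phar
    sudo mv wp-cli.phar /usr/local/bin/wp

With “wp” in place, a minimal sketch of the check-and-update loop might look like the following. The --path and --allow-root flags and the plugin list fields are real wp-cli options, but the location of the original updater script is a placeholder; point it wherever you keep yours:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Ask wp-cli for each plugin's name, active status, and update availability.
    my $wp   = '/usr/local/bin/wp --path=/var/www/blog --allow-root';
    my @rows = `$wp plugin list --fields=name,status,update --format=csv`;
    shift @rows;    # drop the CSV header line

    for my $row (@rows) {
        chomp $row;
        my ( $name, $status, $update ) = split /,/, $row;
        next unless $status eq 'active' && $update eq 'available';

        print "Updating $name...\n";
        # Fall back to the original Perl updater (placeholder path).
        system( '/root/bin/wp-plugin-update.pl', $name );
    }

Cron that, and only active plugins with a pending update ever get touched.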

Continue Reading → Easy fully automated WordPress plugin update system

This problem started as a rather simple one — I needed to send HTML emails, using Perl, to a remote SMTP server with user authentication and TLS support (STARTTLS). The extra “gotcha” is that I refused to read books about MIME structures. Easy enough, right? It turns out it can be, if you know the right answer.

Let’s start with the answer for those who are impatient; it turns out this is the easiest way to achieve it:
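For illustration, here is a minimal sketch using Email::MIME plus Email::Sender, whose SMTP transport speaks STARTTLS natively in recent versions. Treat the module choice as one workable answer rather than the exact snippet from the original post, and the host and credentials as placeholders:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Email::MIME;
    use Email::Sender::Simple qw(sendmail);
    use Email::Sender::Transport::SMTP;

    # STARTTLS on port 587 with SMTP AUTH; host and credentials are placeholders.
    my $transport = Email::Sender::Transport::SMTP->new({
        host          => 'smtp.example.com',
        port          => 587,
        ssl           => 'starttls',
        sasl_username => 'username',
        sasl_password => 'password',
    });

    # Declaring the body as text/html is what spares you the MIME books:
    # Email::MIME builds the headers and the encoding for you.
    my $email = Email::MIME->create(
        header_str => [
            From    => 'me@example.com',
            To      => 'you@example.com',
            Subject => 'HTML test',
        ],
        attributes => {
            content_type => 'text/html',
            charset      => 'UTF-8',
            encoding     => 'quoted-printable',
        },
        body_str => '<html><body><h1>Hello</h1><p>Sent from Perl.</p></body></html>',
    );

    sendmail( $email, { transport => $transport } );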

That’s it — and “it just works” ™. As you might have guessed, this is also the perfect solution for something like Amazon’s AWS SES service.

Continue Reading → Sending HTML emails with Perl to a remote SMTP with TLS

UPDATE: Please check out the latest version from my git repo: http://git.vpetkov.net/projects – project name: “pandora”

It seems that Pandora is not putting much time or thought into how they provide and control access to music through their website. I really hope they fix this, since it’s irresponsible as far as the DMCA is concerned. Each song is simply an encoded token, pulled down directly from, presumably, one of their proxy servers. If you watch the stream while playing songs on pandora.com, you will notice something like this (example, not real):

Some assumptions: “version=4” is high quality, or what used to be CD quality (192 kbps). “lid=#####” is the “login id”, your unique user number. “token=…” is the actual song, encoded. By finding the host of these requests and putting it all together (the lid is completely optional), you have a full request URL to a song.

Imagine putting it together like this (as a proof of concept):
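The original example isn’t reproduced in this archive view; purely as an illustration, with a made-up host and placeholder values everywhere, the request amounted to nothing more than an ordinary HTTP GET:

    use LWP::Simple qw(getstore);

    # Everything below is a placeholder; the real host and token format are not shown.
    my $host  = 'audio-proxy.example.com';
    my $lid   = '12345';             # the optional "login id"
    my $token = 'XXXXXXXXXXXXXXXX';  # the encoded song token

    getstore( "http://$host/access/?version=4&lid=$lid&token=$token", 'song.mp3' );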

Then having something that parses this “buffer”:
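Again only a sketch: given a captured page or stream “buffer”, pulling the tokens out is a one-line regex (the parameter name follows the illustrative example above):

    # Extract every token= value from the captured buffer.
    my @tokens = $buffer =~ m{token=([A-Za-z0-9%+/=_-]+)}g;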

One way for them to fix this would be to session-encode the requests. You should not be able to make requests directly to these servers from outside of pandora.com. The requests should also be authenticated. In addition, they could be checked against what is actually being “played” and throttled to normal streaming rates. I really hope they fix this as soon as possible.

[updated: Sep 30th, 2018 | New easy and fully automated system for updating plugins – this is the “perfect” solution to this problem]

[updated: Feb 11th, 2018 | Updated script to deal with new format, syntax, URLs]

LATEST UPDATE: Please check out my “perfect” WordPress plugin update solution: http://blog.vpetkov.net/2018/09/30/easy-fully-automated-wordpress-plugin-update-system

Droidzone (Joel Mathew) has created a much more advanced fork of this with many improvements – check it out: http://blog.droidzone.in/2013/03/31/automatically-update-all-wordpress-plugins-from-bash/ (While I still very much support his fork, I believe my updated solution from Sept 30th, 2018 is considerably easier and has a single dependency, “WWW::Mechanize”. I’m leaving the link to Joel’s version here for anyone interested in taking a look. His fork supports nice visual output and other options that some may find useful. I believe its last update was in 2014.)

I already created a script to upgrade WordPress installations automatically. You can find it here: http://blog.vpetkov.net/2011/06/01/script-to-upgrade-wordpress-to-the-latest-version-fully-automatically Recently, the same general problem came up with plugins. The biggest problem was that I had to log into WordPress, see a number of outdated plugins, and then hunt each one down, generally by copying the name and pasting it into Google. Even though the plugin was usually the first hit, I then had to download the latest version, extract it, and clean it up. Imagine doing this for 10+ plugins across 5+ blogs — constantly. It was just time-consuming and frustrating.

Here is my solution in the form of a Perl script:
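(The script itself isn’t shown in this archive view. What follows is a minimal sketch of the approach rather than the original code; the plugin directory and plugin names are examples, and it assumes WWW::Mechanize plus a system “unzip”.)

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Mechanize;

    my $plugins_dir = '/var/www/blog/wp-content/plugins';  # adjust for your install
    my @plugins     = @ARGV ? @ARGV : qw(akismet wp-super-cache);  # example names

    my $mech = WWW::Mechanize->new();
    for my $plugin (@plugins) {
        # Each plugin page links to the latest versioned zip on downloads.wordpress.org.
        $mech->get("https://wordpress.org/plugins/$plugin/");
        my $link = $mech->find_link( url_regex => qr/downloads\.wordpress\.org/ );
        unless ($link) {
            warn "No download link found for $plugin\n";
            next;
        }

        my $zip = "/tmp/$plugin.zip";
        $mech->get( $link->url, ':content_file' => $zip );

        # Extract over the existing plugin directory, replacing it in place.
        system( 'unzip', '-o', '-q', $zip, '-d', $plugins_dir ) == 0
            or warn "unzip failed for $plugin\n";
        unlink $zip;
    }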

This script can be used in one of two ways:

1.) You can simply run it, and it will update everything that you have listed in the @plugins array.

2.) You can give it the name of a registered plugin as a parameter. This does two jobs: it upgrades an existing plugin, AND it installs new ones.

You can definitely extend this. For #1, you can go a step further by making it scan your plugin directory and populate the list from there. If you want to be even fancier, you can relatively easily keep track of the versions you have installed and what’s currently available, so that you don’t just blindly download new plugins. For me this is sufficient. If anyone is interested in getting help implementing any of these additions, feel free to ask and I’ll help as much as I can.
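For instance, populating @plugins from the plugin directory (a sketch; $plugins_dir as in the script above) is just a readdir:

    # Build the plugin list from whatever is already installed,
    # instead of hard-coding it.
    opendir my $dh, $plugins_dir or die "Cannot open $plugins_dir: $!";
    my @plugins = grep { !/^\./ && -d "$plugins_dir/$_" } readdir $dh;
    closedir $dh;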

[updated: Sep 30th, 2018 | Cleaned up script, and references “perfect” plugin update system]

NOTE: Please check out my “perfect” WordPress plugin update solution: http://blog.vpetkov.net/2018/09/30/easy-fully-automated-wordpress-plugin-update-system

When you host your own WordPress installation, and there is an update about every month or so, doing all the upgrade steps manually quickly gets very annoying (for the people who do not have a cPanel or FTP account). Now imagine hosting 5-6 WordPress installations. Now imagine 100+. Welcome to my nightmare. Eventually I caved in and wrote this:
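(As with the plugin script, the code isn’t shown in this archive view; here is a minimal sketch of the flow it follows, with placeholder paths. The latest.zip URL is the standard wordpress.org one, and the copy step relies on the archive not containing a wp-config.php, so your configuration survives.)

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::Simple qw(getstore is_success);

    my $blog_dir = '/var/www/blog';     # your WordPress root; adjust
    my $work_dir = '/tmp/wp-upgrade';

    mkdir $work_dir unless -d $work_dir;

    # Grab the latest core release from wordpress.org.
    is_success( getstore( 'https://wordpress.org/latest.zip',
                          "$work_dir/latest.zip" ) )
        or die "Download failed\n";

    system( 'unzip', '-q', '-o', "$work_dir/latest.zip", '-d', $work_dir ) == 0
        or die "unzip failed\n";

    # Merge the new files over the existing install. The archive has no
    # wp-config.php, so the configuration is left alone, and wp-content is
    # merged rather than replaced, so plugins and uploads survive.
    system("cp -a $work_dir/wordpress/. $blog_dir/") == 0
        or die "copy failed\n";

    print "Done. Log into /wp-admin/ in case a database upgrade is needed.\n";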

So, to summarize, this will download the latest version of WordPress, unzip it, and move the new files into place accordingly.
At the end, it will remind you to “upgrade” your DB, just in case there is a database upgrade. I highly suggest backing up your primary blog before you begin, just because it’s the “safe thing” to do.
That said, in the 10+ years I’ve used this (starting long before I posted it here), I’ve never had anything go wrong!

[updated: Jan 2nd, 2013 | Updated ‘tiny.pl’ to skip “mailto:” references … it polluted emails with replies]
[updated: Nov 28th, 2012 | Updated ‘tiny.pl’ to be much more efficient which resulted in a crazy speedup]

Yesterday I received one too many emails with a long URL that I actually needed to click on. Why is this a problem, you wonder? — I use Mutt. Yes, I’ve heard it all, and yes, I don’t think it’s the best email client, but when you spend all day in a terminal, it’s simply a pain to launch a browser just to send a single email. It’s “ctrl+a, #, m, type…type, y…sent” vs. “open browser, go to URL, log in, compose, type…type, hit send”. Anyway, given that I am using the Apple Terminal.app, which for some reason has not been upgraded in the last 8 years to include hot-linking URLs everywhere (correction: it does, but it does not always handle right-clicking multi-line links which contain “strange characters and symbols”), I have to suffer. I’ve been toying with the idea of parsing my Mutt emails for a while now, and yesterday I finally decided to sit down and write something. My starting point was my .mailcap entry:
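It looked something like the classic text-browser dump entry (the exact flags here are an approximation):

    text/html; lynx -dump %s; copiousoutput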

My first thought was: why not hook in a custom Perl script to parse the text of the email, extract the URLs, and shorten them? After a little bit of work, I realized that I care about the rest of the text too, not just the URLs. The final solution can be found here:

http://perl.vpetkov.net/tiny.pl [updated: Jan 2nd, 2013]

In order to use it, you need to put this somewhere (.mutt is a good place), and then modify your .mailcap:
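For example (the second argument for non-HTML mail matches the overloading described below; treat the exact entries as an approximation):

    text/html;  ~/.mutt/tiny.pl %s; copiousoutput
    text/plain; ~/.mutt/tiny.pl %s t; copiousoutput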
The script starts by NOT reinventing the wheel and utilizing a text browser to dump the HTML; note that this step is skipped for the ‘text/plain’ type. The way the same script is overloaded is by supplying a second argument for non-HTML emails. As you will see, I actually use elinks instead of lynx to parse the HTML, because for some reason lynx introduces a stray character on long URLs when used with --dump, which created problems with shortening them. The script then splits the resulting output into lines, and each line into “words”. Each “word” is checked for being a URL. If it is, and it’s longer than the “trigger” number of characters, it’s shortened and printed along with the original (nice to keep track). Otherwise, the word is just printed, and the process repeats until the end.
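In sketch form, the per-word loop looks roughly like this (the names and the shortener are illustrative; shorten() here is a stub standing in for a call to a URL-shortening API such as bit.ly):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $trigger = 60;                      # shorten anything longer than this
    my $text    = do { local $/; <STDIN> };

    # Hypothetical helper: the real script would call a shortening API here.
    sub shorten { my ($url) = @_; return 'http://bit.ly/xxxx'; }

    for my $line ( split /\n/, $text ) {
        for my $word ( split /\s+/, $line ) {
            # Shorten anything that looks like a long URL; print everything
            # else untouched.
            if ( $word =~ m{^https?://}i && length($word) > $trigger ) {
                print shorten($word), " [$word] ";
            }
            else {
                print "$word ";
            }
        }
        print "\n";
    }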
While this is EXTREMELY simple concept-wise, it is very useful. Is there a downside? — Yes — some potentially private URLs are now public. Solution? — Yes — sign up for bit.ly Pro (free) and use your own domain name. Lastly, I just want to tack on that while searching for an existing solution to this, I did find a program called “urlview”. I haven’t tried it, but it seems like a much better solution. Here’s some more information on it: http://linuxcommand.org/man_pages/urlview1.html
UPDATE: As it turns out, Terminal.app actually picks up some/most/(all?) long URLs. I think it was ‘lynx’ that was wrapping lines at 65-72 characters, which ended up being the cause of the ‘+’ in front of long URLs and the break onto two lines. So basically, this means that if you don’t use ‘lynx’ for the HTML parsing, you can potentially click on the links. Either way, I still prefer having tinyurls. Also, I did find a bug in the ‘t’ (non-HTML emails) version of the script, where in some cases it will rip out the URL but not show it at all (original or shortened). I noticed this twice (out of 1000+ emails), but I just haven’t had time to look into it. I have a feeling that it’s not really my script: the email comes in not containing any HTML other than an <a href> tag, and I think that messes with the detection.