Tag Archives: Perl

This problem started as a rather simple one — I needed to send HTML emails, using Perl, to a remote SMTP server with user authentication and TLS support (STARTTLS). The extra “gotcha” is that I refused to read books about MIME structures. Easy enough, right? It turns out it can be, if you know the right answer.

Let’s start with the answer for those who are impatient – it turns out this is the easiest way to achieve it:
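A minimal sketch of that answer, assuming Email::MIME and Email::Sender::Transport::SMTP from CPAN (the SES hostname and credentials below are placeholders, and older versions of the transport shipped a separate TLS module instead of accepting ssl => 'starttls'):

    #!/usr/bin/perl
    use strict;
    use warnings;

    use Email::MIME;
    use Email::Sender::Simple qw(sendmail);
    use Email::Sender::Transport::SMTP;

    # Build a simple HTML-only message -- no hand-rolled MIME structures needed
    my $message = Email::MIME->create(
        header_str => [
            From    => 'sender@example.com',
            To      => 'recipient@example.com',
            Subject => 'Hello in HTML',
        ],
        attributes => {
            content_type => 'text/html',
            charset      => 'UTF-8',
            encoding     => 'quoted-printable',
        },
        body_str => '<html><body><h1>Hello</h1><p>This is an <b>HTML</b> email.</p></body></html>',
    );

    # Remote SMTP with authentication and STARTTLS (host and credentials are placeholders)
    my $transport = Email::Sender::Transport::SMTP->new({
        host          => 'email-smtp.us-east-1.amazonaws.com',
        port          => 587,
        ssl           => 'starttls',
        sasl_username => 'SMTP_USERNAME',
        sasl_password => 'SMTP_PASSWORD',
    });

    sendmail($message, { transport => $transport });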

That’s it — and “it just works” ™. As you might have guessed, this is also the perfect solution for something like Amazon’s AWS SES service.

UPDATE: Please check out the latest version from my git repo: http://git.vpetkov.net/projects – project name: “pandora”

It seems that Pandora is not putting much time or thought into how they provide access to music online through their website. I really hope they fix this, since it’s irresponsible as far as the DMCA is concerned. Each song is simply an encoded token, and it’s pulled down directly from, presumably, one of their proxy servers. If you look at the stream while playing songs on pandora.com, you will notice something like this (ex: not real):
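A made-up request of the same general shape, with a placeholder host and path and only the parameters described below:

    http://<audio-host>.pandora.com/<path>?version=4&lid=#####&token=<long-encoded-string>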

Some assumptions: “version=4” is high quality, or what used to be CD quality (192 kbps). “lid=#####” is the “login id”, i.e. your unique user number. “token=…” is the actual song, encoded. By finding the host of these requests and putting it all together (the lid is completely optional), you have a full request URL to a song.

Imagine putting it all together like this (an example, as a proof of concept):
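A rough proof-of-concept sketch with LWP::UserAgent, where the host and path are placeholders and only the parameters described above are assumed:

    #!/usr/bin/perl
    use strict;
    use warnings;

    use LWP::UserAgent;

    # Placeholders -- the real host and path would come from watching the stream
    my $host  = 'audio-host.pandora.com';
    my $token = shift @ARGV or die "usage: $0 <token> [output.mp3]\n";
    my $out   = shift @ARGV || 'song.mp3';

    # The lid parameter is completely optional, so it is left out here
    my $url = "http://$host/access/?version=4&token=$token";

    my $ua  = LWP::UserAgent->new(agent => 'Mozilla/5.0');
    my $res = $ua->get($url, ':content_file' => $out);

    die 'Request failed: ' . $res->status_line . "\n" unless $res->is_success;
    print "Saved $out (", -s $out, " bytes)\n";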

Then having something that parses this “buffer”:
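A rough sketch of that parsing step, assuming the buffer is just captured text containing URLs of the shape above:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read the captured "buffer" (an HTTP dump, a saved page, etc.) from STDIN or a file
    local $/;
    my $buffer = <>;

    # Pull out anything that looks like one of the song requests described above
    my @urls = $buffer =~ m{(https?://[^\s"'<>]+token=[^\s"'<>&]+[^\s"'<>]*)}g;

    for my $url (@urls) {
        my ($token) = $url =~ /token=([^&\s"'<>]+)/;
        print "$url\n\ttoken: $token\n";
    }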

One way for them to fix this would be to session-encode the requests. You should not be able to make requests directly to the servers that originate from outside of pandora.com. The requests should also be authenticated. In addition, requests could be checked against what is actually being “played” and rate-limited to normal streaming behavior. I really hope they fix this as soon as possible.

[updated: Dec 20th, 2012 | Updated script to deal with new URL and pulling short descriptions now]

UPDATE: Please check out the latest version from my git repo: http://git.vpetkov.net/projects – project name: “wordpress”

UPDATE2: Forgot to mention – a MUCH nicer fork of this + many improvements by Droidzone: http://blog.droidzone.in/2013/03/31/automatically-update-all-wordpress-plugins-from-bash/

Please take a look at his version!

I already created a script to upgrade WordPress installations automatically. You can find it here: http://blog.vpetkov.net/2011/06/01/script-to-upgrade-wordpress-to-the-latest-version-fully-automatically Recently, the same general problem came up with plugins. The biggest problem was that I had to log into WordPress, see a number of plugins that were outdated, and then go hunt each one down, generally by copying the name and pasting it into Google. Even though the plugin was usually the first hit, I then had to download the latest version, extract it, and clean it up. Imagine doing this for 10+ plugins across 5+ blogs — constantly. It was just time consuming and frustrating.

Here is my solution in the form of a perl script:
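A condensed sketch of what the script does; the plugin directory, the plugin list, and the downloads.wordpress.org URL pattern are assumptions to adjust for your own setup:

    #!/usr/bin/perl
    # usage: ./update-plugins.pl [plugin-name]
    use strict;
    use warnings;

    use File::Path qw(remove_tree);

    # Assumptions -- adjust for your own installation
    my $plugin_dir = '/var/www/blog/wp-content/plugins';
    my @plugins    = qw(akismet wordpress-importer);   # plugins to keep updated

    # A plugin name on the command line both upgrades an existing plugin and installs a new one
    @plugins = ($ARGV[0]) if @ARGV;

    chdir $plugin_dir or die "Cannot chdir to $plugin_dir: $!\n";

    for my $plugin (@plugins) {
        my $zip = "$plugin.zip";
        my $url = "https://downloads.wordpress.org/plugin/$plugin.latest-stable.zip";

        print "Updating $plugin ...\n";
        if (system('wget', '-q', '-O', $zip, $url) != 0) {
            warn "download failed for $plugin\n";
            next;
        }

        # Remove the old copy and unpack the new one in its place
        remove_tree($plugin) if -d $plugin;
        system('unzip', '-q', '-o', $zip) == 0 or warn "unzip failed for $plugin\n";
        unlink $zip;
    }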

This script can be used in one of two ways:

1.) You can simply run it, and it will update everything that you have listed in the @plugins array.

2.) You can give it a parameter of a registered plugin name. This does 2 jobs — upgrades an existing plugin, AND installs new ones.

You can definitely extend this. For #1, you could go a step further by making it scan your plugin directory and populate the list from there. If you want to be even fancier, you could relatively easily keep track of the versions you have installed versus what is currently available, so that you don’t just blindly download new plugins. For me this is sufficient. If anyone is interested in help implementing any of these additions, feel free to ask and I’ll help as much as I can.

UPDATE: Please check out the latest version from my git repo: http://git.vpetkov.net/projects – project name: “wordpress”

When you host a WordPress installation for yourself, and there is some sort of an update about every month, it can get annoying doing all the upgrade steps manually (for the people who do not have a cPanel or FTP account). Now imagine hosting 5-6 WordPress installations. Now imagine 500+. Welcome to my nightmare. Eventually I caved in and wrote this:
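A rough sketch of what it automates, with the blog path as an assumption; wp-content and wp-config.php are deliberately left untouched:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Assumed paths -- adjust for your installation
    my $blog_dir = '/var/www/blog';
    my $work_dir = '/tmp/wp-upgrade';

    system("mkdir -p $work_dir") == 0 or die "cannot create $work_dir\n";
    chdir $work_dir or die "cannot chdir to $work_dir: $!\n";

    # Grab and unpack the latest release (it unpacks into ./wordpress)
    system('wget', '-q', '-O', 'latest.tar.gz', 'https://wordpress.org/latest.tar.gz') == 0
        or die "download failed\n";
    system('tar', 'xzf', 'latest.tar.gz') == 0 or die "extract failed\n";

    # Replace the core directories; wp-content and wp-config.php are left untouched
    system("rm -rf $blog_dir/wp-admin $blog_dir/wp-includes");
    system("cp -r wordpress/wp-admin wordpress/wp-includes $blog_dir/");
    system("cp wordpress/*.php $blog_dir/");   # top-level core files (index.php, wp-login.php, ...)

    print "Done. Remember to visit /wp-admin/upgrade.php in case the database needs upgrading.\n";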

So, to summarize, this will download the latest version of WordPress, unzip it, and move the new files into place. At the end, it will remind you to upgrade your DB, just in case. I highly suggest backing up your primary blog before you begin, just because it’s the “safe thing” to do.

[updated: Jan 2nd, 2013 | Updated ‘tiny.pl’ to skip “mailto:” references … it polluted emails with replies]
[updated: Nov 28th, 2012 | Updated ‘tiny.pl’ to be much more efficient which resulted in a crazy speedup]

Yesterday I received one too many emails with a long URL that I actually needed to click on. Why is this a problem, you wonder? — I use Mutt. Yes, I’ve heard it all, and no, I don’t think it’s the best email client, but when you spend all day in a terminal, it’s simply a pain launching the browser to send a single email. It’s “ctrl+a, #, m, type…type, y…sent” vs. “open browser, go to URL, log in, compose, type…type, hit send”. Anyway, given that I am using the Apple Terminal.app, which for some reason has not been upgraded in the last 8 years to include hot-linking of URLs everywhere (correction: it does, but it does not always handle right-clicking multi-line links which contain “strange characters and symbols”), I have to suffer. I’ve been toying with the idea of parsing my mutt emails for a while now, and yesterday I finally decided to sit down and write something. My starting point was my .mailcap entry:
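A typical entry of that kind, rendering HTML mail through a text browser (whether the original used lynx or something else is an assumption):

    text/html; lynx -dump %s; copiousoutput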

My first thought was: why not hook in a custom Perl script to parse the text from the email, extract the URLs, and shorten them? After a little bit of work, I realized that I care about the rest of the text too, not just the URLs. The final solution can be found here:

http://perl.vpetkov.net/tiny.pl [updated: Jan 2nd, 2013]

In order to use it, you need to put this somewhere (.mutt is a good place), and then modify your .mailcap:
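Assuming the script lives in ~/.mutt and takes the second argument mentioned below for plain-text mail, the entries would look roughly like:

    text/html;  ~/.mutt/tiny.pl %s;   copiousoutput
    text/plain; ~/.mutt/tiny.pl %s t; copiousoutput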

The script starts by NOT reinventing the wheel and uses a text browser to parse out the HTML; note that this is done only for the ‘text/plain’ output type. The same script is overloaded by supplying a second argument for non-HTML emails. As you will see, I actually use elinks to parse the HTML instead of lynx, because lynx introduces an extra character on long URLs for some reason when used with --dump, and this created problems with shortening the URLs. The script then splits the resulting output into lines, and splits each line into “words”. Each “word” is checked for being a URL. If it is, and it is longer than the “trigger” number of characters, it is shortened and printed along with the original (nice to keep track). Otherwise, the word is just printed, and the process repeats until the end.
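A rough sketch of that flow (this is not the original tiny.pl; the bit.ly credentials, the legacy v3 shorten endpoint, and the trigger length are assumptions):

    #!/usr/bin/perl
    use strict;
    use warnings;

    use LWP::Simple qw(get);
    use URI::Escape qw(uri_escape);

    my ($file, $plain) = @ARGV;     # a second argument means a non-HTML email
    my $trigger = 40;               # only URLs longer than this get shortened

    # Assumed bit.ly credentials and (legacy) shorten endpoint
    my ($bitly_user, $bitly_key) = ('USERNAME', 'API_KEY');

    sub shorten {
        my $url   = shift;
        my $short = get( 'http://api.bitly.com/v3/shorten?format=txt'
                       . "&login=$bitly_user&apiKey=$bitly_key"
                       . '&longUrl=' . uri_escape($url) );
        chomp $short if defined $short;
        return $short;
    }

    # Let a text browser deal with the HTML instead of parsing MIME by hand
    my $text = $plain
        ? do { local $/; open my $fh, '<', $file or die "$file: $!"; <$fh> }
        : qx(elinks -dump "$file");

    for my $line (split /\n/, $text) {
        my @words;
        for my $word (split / /, $line) {
            if ($word =~ m{^https?://\S+$} && length($word) > $trigger) {
                my $short = shorten($word);
                push @words, $short ? "$short [$word]" : $word;
            } else {
                push @words, $word;
            }
        }
        print join(' ', @words), "\n";
    }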

While this is EXTREMELY simple concept-wise, it is very useful. Is there a downside? — Yes — some potentially private URLs are now public. Solution? — Yes — sign up for bit.ly Pro (free) and use your own domain name. Lastly, I just want to tack on that while searching for an existing solution to this, I did find a program called “urlview”. I haven’t tried it, but it seems like a much better solution. Here’s some more information on it: http://linuxcommand.org/man_pages/urlview1.html

UPDATE: As it turns out, Terminal.app actually picks up some/most/(all?) long URLs. I think it was ‘lynx’ that was wrapping lines at 65-72 characters, which ended up being the cause of the ‘+’ in front of long URLs and the break onto two lines. Basically, this means that if you don’t use ‘lynx’ for the HTML parsing, you can potentially click on the links. Either way, I still prefer having tinyurls. Also, I did find a bug in the ‘t’ (non-HTML emails) version of the script, where in some cases it will rip out the URL but not show it at all (original or shortened). I noticed this 2 times (out of 1000+ emails), but I just haven’t had time to look into it. I have a feeling that it’s not really my script’s fault. The email comes in containing no HTML other than an <a href> tag, and I think that messes with the detection.