
Let me preface this with the fact that you just can’t do this unless your iPhone is jailbroken. For the non-curious, stop reading here.

This one came about as I was recently forced at work to switch from our Unix email system to a hosted Exchange solution, so that our calendars would be centrally accessible to everyone. Details aside, after adding my Exchange account to my iPhone (since I am trying to keep my BlackBerry off BES), I realized that the color scheme absolutely sucks. From somewhere, it decided that purple was the best color, and I couldn’t change it. After aimlessly searching through Calendar.app on the iPhone for a color-changing option, I came to the realization that there was no way to do it. Luckily, my iPhone was jailbroken, and with a little background work there are plenty of ways to do this. I found this amazing article: http://chriscarey.com/wordpress/2009/02/10/how-to-modify-iphone-calendar-colors-with-sqlite3/

To summarize it, in case the article disappears:

Start by SSH-ing into your phone.

One tip that I can give: if you don’t have sqlite3 on your iPhone (which you wouldn’t by default), scp the file to your computer, apply the changes there, and scp it back to the iPhone.
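If you go the scp route, the round trip looks roughly like this (substitute your phone’s IP address; the database path is my assumption for that era of firmware, so verify it on your own phone):

    # on your computer, with OpenSSH installed on the jailbroken phone
    scp root@192.168.1.20:/private/var/mobile/Library/Calendar/Calendar.sqlitedb .
    # ... edit the local copy with sqlite3 (see the UPDATE example below) ...
    scp Calendar.sqlitedb root@192.168.1.20:/private/var/mobile/Library/Calendar/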

Here are the RGB values for the standard colors:

Red = (181,0,13)
Orange = (229,98,0)
Green = (47,141,0)
Blue = (15,77,140)
Purple = (103,10,108)

So, with a single UPDATE statement against the calendar database, I was able to make my default calendar (the Exchange one) RED, which conveys the “important” notion and is easily visible.
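Something along these lines, with the caveat that the table and column names here are my assumptions (check them with .schema against your own Calendar.sqlitedb first) and the ROWID is just an example:

    sqlite3 Calendar.sqlitedb
    sqlite> .schema Calendar                  -- confirm the real column names first
    sqlite> SELECT ROWID, * FROM Calendar;    -- find the row for your Exchange calendar
    sqlite> -- assumed column names; adjust to whatever .schema shows (Red = 181,0,13)
    sqlite> UPDATE Calendar SET colorR = 181, colorG = 0, colorB = 13 WHERE ROWID = 4;
    sqlite> .quit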

Hope this helps everyone who is trying to accomplish this. Don’t forget to close and restart Calendar.app. If you don’t have a jailbroken iPhone, you can change your non-Exchange calendars by syncing them to iCal, changing the color there, and syncing them back.

[updated: Jan 2nd, 2013 | Updated ‘tiny.pl’ to skip “mailto:” references … it polluted emails with replies]
[updated: Nov 28th, 2012 | Updated ‘tiny.pl’ to be much more efficient which resulted in a crazy speedup]

Yesterday I received one too many emails with a long URL that I actually needed to click on. Why is this a problem, you wonder? — I use Mutt. Yes, I’ve heard it all, and yes, I don’t think it’s the best email client, but when you spend all day in a terminal, it’s simply a pain launching the browser to send a single email. It’s “ctrl+a, #, m, type…type, y…sent” vs. “open browser, go to URL, log in, compose, type…type, hit send”. Anyway, given that I am using the Apple Terminal.app, which for some reason has not been upgraded in the last 8 years to hot-link URLs everywhere (correction: it does, but it does not always handle right-clicking multi-line links which contain “strange characters and symbols”), I have to suffer. I’ve been toying with the idea of parsing my mutt emails for a while now, and yesterday I finally decided to sit down and write something. My starting point was my .mailcap entry.
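A typical entry for that, assuming lynx is available, looks something like this:

    text/html; lynx -dump %s; copiousoutput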

My first thought was: why not hook in a custom Perl script to parse the text of the email, extract the URLs, and shorten them? After a little bit of work, I realized that I care about the rest of the text too, and not just the URLs. The final solution can be found here:

http://perl.vpetkov.net/tiny.pl [updated: Jan 2nd, 2013]

In order to use it, you need to put the script somewhere (~/.mutt is a good place), and then modify your .mailcap.
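For example, something along these lines (the ~/.mutt/tiny.pl path is illustrative, and the ‘t’ second argument is the script’s non-HTML mode mentioned further down); depending on your setup you may also need auto_view text/html in your .muttrc:

    # .mailcap (paths are illustrative)
    text/html;  ~/.mutt/tiny.pl %s;   copiousoutput
    text/plain; ~/.mutt/tiny.pl %s t; copiousoutput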
The script starts by NOT reinventing the wheel, and uses lynx/elinks to parse out the HTML. Please note that this is done only for the ‘text/html’ type; the same script is overloaded, by supplying a second argument, for non-HTML emails. As you will see, I actually use elinks rather than lynx to parse the HTML, because for some reason lynx introduces an extra character on long URLs when used with --dump, which created problems with shortening them. The script then splits the resulting output into lines, and each line into “words”. Each “word” is checked for being a URL. If it is, and it’s longer than the “trigger” number of characters, it’s shortened and printed along with the original (nice to keep track). Otherwise, the word is just printed, and the process repeats until the end.
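As a rough sketch of that loop (this is not the actual tiny.pl, which you should grab from the link above; the trigger length and shorten() are placeholders for the real shortener call):

    #!/usr/bin/perl
    # Sketch of the word-by-word URL-shortening loop; not the real tiny.pl.
    use strict;
    use warnings;

    my $trigger = 50;    # assumed threshold: only URLs longer than this get shortened

    sub shorten {
        my ($url) = @_;
        return "http://tiny.example/abc";    # placeholder for the real shortener API call
    }

    # Reads the already-dumped plain text on STDIN for simplicity.
    while (my $line = <STDIN>) {
        chomp $line;
        my @out;
        for my $word (split ' ', $line) {
            # treat http(s) and www words as URLs; mailto: is skipped by construction
            if ($word =~ m{^(?:https?://|www\.)}i && length($word) > $trigger) {
                push @out, shorten($word) . " [" . $word . "]";    # keep the original too
            }
            else {
                push @out, $word;
            }
        }
        print join(" ", @out), "\n";
    }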
While this is EXTREMELY simple concept-wise, it is very useful. Is there a downside? — Yes: some potentially private URLs are now public. Solution? — Yes: sign up for bit.ly Pro (free) and use your own domain name. Lastly, I just want to tack on that while searching for an existing solution to this, I did find a program called “urlview”. I haven’t tried it, but it seems like a much better solution. Here’s some more information on it: http://linuxcommand.org/man_pages/urlview1.html

UPDATE: As it turns out, Terminal.app actually picks up some/most/(all?) long URLs. I think it was lynx that was wrapping lines at 65-72 characters, which ended up being the cause of the ‘+’ in front of long URLs and the break onto two lines. Basically, this means that if you don’t use lynx for the HTML parsing, you can potentially click on the links. Either way, I still prefer having tinyurls. Also, I did find a bug in the ‘t’ (non-HTML emails) version of the script, where in some cases it will rip out the URL but not show it at all (original or shortened). I’ve noticed this twice (out of 1000+ emails), but I just haven’t had time to look into it. I have a feeling that it’s not really my script: the email comes in containing no HTML other than an <a href> tag, and I think that messes with the detection.

If you just want the good stuff (configs, how-to’s about this), check out:

http://blog.vpetkov.net/documentation/network-services/smtps-and-imaps/how-to-tunnel-smtp-postfix-server-to-google-gmailgoogle-apps/

If you want to read the full story behind why I even went this route, please continue below:

Recently I started looking at getting rid of as much physical infrastructure as possible. My reasoning: beyond being a pain to maintain, everything surrounding owning your own infrastructure is a downfall. Let’s face it — you can’t afford what is really needed to have 99.99%-100% uptime. There are tricks you can use to join multiple sites, but again, when you really get into it, it costs money and it takes time. Other than having an ESX server as a personal “lab”, I’ve realized that I spend just as much time dealing with physical infrastructure as I do creating services, hosting stuff, automating things, and programming. This is just wrong! Also, hosting your own infrastructure means dealing with power, bandwidth, static IPs, and so on. Anyway, with that in mind, I started looking at getting rid of my biggest service with the fewest users — email.

I hosted a Zimbra server (which I absolutely love) for almost a year, and before that I hosted, for 7+ years (and still do at other locations), mail servers running Postfix + Dovecot + SpamAssassin with some webmail client (SquirrelMail or RoundCube). The problem with hosting your own email server (I’ll use Postfix synonymously with “email server”) is that everything is a hassle and a half. At the end of the day, if you have one Postfix server, this is fine. If you have 50+ Postfix servers, not so much. And yes, you can ease it by using Puppet and common config management like svn+rsync, but it’s still a hassle. The other problem is that common needs like push email, Exchange, BlackBerry BES, calendars, notes, and others simply do not exist as an “all-in-one” solution that attaches to Postfix. I realized that while my setup was extremely efficient, and while procmail is simply priceless, it is not economical at the end of the day. Users want ease of use, convenience, pretty UIs, and no spam, without any effort on their behalf.

This led me to looking at Google Apps (I’ll use Gmail synonymously). It seemed like the perfect solution — off-site, fully managed, relatively cheap (or free), a common UI which almost everyone is familiar with, and virtually no spam. It provides SMTP(S), IMAP(S), POP(S), and other common services. The few problems that should be brought up front are: privacy, security, space, and limitations. With “Gmail” (the free Google Apps tier), you are limited to 7-7.5GB per user, 25 users, and “some” advanced SMTP features. You can always pay $50/year/account to get 25GB, unlimited users, and some more programmable/API features. The thing that really attracted me was the ability to get an “all-in-one” solution that was extremely easy to deploy for multiple users. The reality is that most users just want their own email at their own domain, with some storage, some web UI, and no spam or viruses. This was something I was already doing with my “Postfix setup”, and had in fact scripted quite well, but with Google Apps it was a matter of 15 minutes per account.

Now, the two main problems were: how do my users who use mutt (myself being one of them) get to their email, and how do existing services AND “dumb services” (storage devices, vCenter, etc.) communicate with the “Gmail” servers? The first — mutt — turned out to be much easier than I thought. If you are already using mutt with any authenticated IMAP/SMTP server, you have probably already stumbled onto msmtp. With a little more work, you can get this piece of software to work perfectly with Gmail. If you need some help, check out: http://blog.vpetkov.net/documentation/network-services/smtps-and-imaps/mutt-with-google-gmailgoogle-apps-or-any-imap-server/ The second problem turned out to be relatively easy after doing some research and a bit of trial and error. The main idea is that you create a simple “relay” server of sorts: a lightweight Postfix installation which only authenticates and forwards/relays all the emails to Gmail/Google Apps/any upstream SMTP server for that matter. I went the extra step and configured it to be able to use different SMTP servers with different auth based on different user/email accounts. You can get all the technical details at the top of this post. Good luck, and I hope this saves you some time.
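As a minimal sketch of such a relay (hostnames, map paths, and credentials are placeholders, not the configs from the link above; the sender-dependent lines are only needed if you want a different upstream account per local sender):

    # /etc/postfix/main.cf (relevant fragment only)
    relayhost = [smtp.gmail.com]:587
    smtp_tls_security_level = encrypt
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    # optional: different upstream server/login per local sender
    sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
    smtp_sender_dependent_authentication = yes

    # /etc/postfix/sasl_passwd (run "postmap /etc/postfix/sasl_passwd" afterwards)
    [smtp.gmail.com]:587    user@yourdomain.com:your-password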