My opinion of Tasker in one sentence: "This is the first app that you should install on every Android phone."

A few months ago I needed an "automation" app. I was going to default to Locale, which I had used when it was still free (this was before there was an official app store for Android), but the $10 cost made me look at my options — I figured that if I was going to pay that amount, I might as well get the best app for the job. My problem with Locale is that it's simply not that powerful: it doesn't have tons of features, and all the good plug-ins, instead of being built in, are additional paid add-ons. I am really glad I looked elsewhere, because I stumbled upon Tasker (http://tasker.dinglisch.net/). The official description for Tasker states that "Tasker is an application for Android which performs Tasks (sets of Actions) based on Contexts (application, time, date, location, event, gesture) in user-defined Profiles, or in clickable or timer home screen widgets." I am not sure that I can describe it any better, so let's leave it at that. If you want to learn more about Tasker, start here: http://tasker.dinglisch.net/tour.html

One thing that immediately stood out about Tasker was the insane number of built-in contexts, settings, options, detections, notifications, and actions. One thing led to another, and I started using Tasker for everything. I realized it had so much potential that I went back to stock 2.3.4 and just used Tasker to gain all the CyanogenMod functionality (that I needed, at least).

The reason I decided to write this post at all is to share two "programs" that I wrote, which I think are extremely helpful. The first one is called "Keyguard". The second one is "Blackberry Sound Profiles". I am putting these two in their own posts so that they can easily be indexed and searched by URL. I know from personal experience that the most sought-after thing on Android is the BlackBerry sound profile functionality. Well, it's finally here! See my next two posts for all the information.

Since it seems like people are really interested in this information (especially those outside of the US, where updates are not pushed out), I will continue with the updates regarding the Nexus S. Here's the next (no pun intended) operating system update: 2.3.4:

a14a2dd09749.signed-soju-GRJ22-from-GRI40.a14a2dd0.zip (md5: 92b0f0a0b57a7cf10d2d70610c8bb9fb)

Again, this is directly from Google (the link even points to Google), and you should follow the 7 steps from the http://blog.vpetkov.net/2011/03/11/google-nexus-s-update-manually-to-2-3-1-2-3-2-and-2-3-3/ article.

Please note that the update WILL work if you are running "GRI40" (the build number in Settings -> About Phone).
The biggest update when it comes to new features seems to be that Google Talk now has voice chat! There are also a lot of bug fixes.
For the bug fixes, check out: http://www.google.com/support/forum/p/Google+Mobile/thread?tid=3812c1acf93b482f

 

IF YOU GET AN ERROR:

Please just grab the FULL 2.3.3 system (f182cf141e6a.signed-soju-ota-102588.f182cf14.zip), install it, and then try again. It will work!

 

IF YOU JUST WANT THE RADIO UPDATE:

XXKD1-GRJ22-radio-nexuss-unsigned.zip (md5: 57659f04148ebfa849ef523544f2a3dd)

Note: I personally couldn't apply just the radio update from 2.3.3 (with GRI40) – I kept getting the Status 7 signature verification error, so I used the 2.3.4 update to get the radio patches. That said, I've seen people who have been able to apply the radio update to 2.3.3 without any problems.

 

NOTE: Look at the new post above if your phone is *at* 2.3.3 and you want to go up to 2.3.4

If you just want the LATEST update: grab the FULL 2.3.3 image (f182cf141e6a.signed-soju-ota-102588.f182cf14.zip)

I decided to contribute back, mention a few vital steps, and provide a few important files now that I have solved this, so that someone can go from 2.3(.0) to 2.3.3.
This assumes that you have not rooted your phone. If you have, you need to un-root it and go back to either 2.3.0, 2.3.1, or 2.3.2.

First of all, if you use the built-in "update" method, the updates need to be applied consecutively. In return, each one is very small.

Let’s assume you just bought your Google Nexus S. It came with 2.3 (or 2.3.0 in reality). The first step is to apply the 2.3.1 update. I’ve called this:

update1.zip (md5: a35798d84104c7cb1d26d7946ce843fc)

The general instructions are:

0.) Put the file into the /sdcard directory.
1.) Turn off your phone
2.) Hold Power and Volume-Up until you see the recovery menu (lots of colors and 4 options).
3.) Use the Volume-Down key to scroll down and select "Recovery" by pushing the Power key.
4.) Wait for the triangle with the exclamation point. Push the Power key and while holding it, tap the Volume-Up key.
5.) Now you can use the Volume keys to go to "apply update from /sdcard" and then the Power key to select it.
6.) Select the appropriate ZIP file, and then use the Power key to apply it.
7.) When everything is done, go to the Reboot option with the Volume keys and then use the Power Key to select it.

That said, after you apply the first update, you go from 2.3.0 to 2.3.1. Now apply the 2.3.2 update. I've called this:

update2.zip (md5: 714e1e1126f1a222c10ffce6c83dc6ad)

Same as before. After you go through the steps and reboot, you will be at 2.3.2. Here is where things get interesting. It seems that you need another update. It's for people who get the "Status 7" error.
This is mostly due to the firmware build (those who have GRH78C or GRH78). Here you will need to apply the LAST UPDATE, the same way you applied update1 and update2:

For GRH78C (md5: 3923f98754f756a83b3ecc44e42a2902)

or

Only for GRH78 (md5: 919d7f2c9e06bb03a2ff74081028bf0a)

Finally, reboot, and you are on 2.3.3.

Please note that *ALL* of these files have been taken from Google and are official. For that exact reason, I have provided the md5 checksums, so that you can verify them before you use them.
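If you want a quick way to do that check, here is a tiny sketch in Perl (any md5 tool, such as md5sum, does exactly the same thing); the file name and hash below are simply the update1.zip example from above:

#!/usr/bin/perl
# Sketch: verify a downloaded update against its published md5 before flashing it.
# The file/hash pair below is just the update1.zip example from earlier in this post.
use strict;
use warnings;
use Digest::MD5;

my ($file, $expected) = ('update1.zip', 'a35798d84104c7cb1d26d7946ce843fc');

open(my $fh, '<', $file) or die "cannot open $file: $!";
binmode($fh);
my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
close($fh);

print $digest eq $expected ? "$file: OK\n" : "$file: MISMATCH ($digest)\n";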
Hope this helps.

 

ADDITIONAL INFORMATION AND FILES (if the above did not work — very rare):

Some people (very, very rarely) might still get an error. This happens if you have a different radio version. Check "Settings -> About Phone -> Baseband Version". You should have something that ends in either "XXKB1" or "XXKB3". Here are the two radios; apply them the same way as the items above. You might need this BEFORE the GRH78C (or GRH78) updates.

XXKB1-GRI40-radio-nexuss-unsigned.zip (md5: 4805c255f10eef8b1bd54aa2d27bc30e)

or

XXKB3-GRI54-radio-nexuss-unsigned.zip (md5: 4e9c9cf4d6470be800e00f8508b9c175)

 

LAST RESORT (if nothing above worked — extremely rare):

If nothing worked, try the FULL 2.3.3 OS.

f182cf141e6a.signed-soju-ota-102588.f182cf14.zip (md5: 3e8908941043951da5a34bb2043dd1a0)

Let me preface this with the fact that you just can't do this unless your iPhone is jailbroken. For the non-curious, stop reading here.

This one came about as I was recently forced at work to switch from using the Unix email system to the hosted Exchange solution, so that our calendars would be centrally accessible by everyone. Details aside, after adding my Exchange account to my iPhone (since I am trying to keep my BlackBerry off BES), I realized that the color schemes absolutely suck. From somewhere, it decided that purple was the best color, and I couldn't change it. After aimlessly searching through the Calendar.app on the iPhone for a color-changing option, I came to the realization that there was no way to do it. Luckily, my iPhone was jailbroken, and there are plenty of ways to do this with a little background work. I found this amazing article: http://chriscarey.com/wordpress/2009/02/10/how-to-modify-iphone-calendar-colors-with-sqlite3/

To summarize it, in case the article disappears:

Start by ssh-ing into your phone

One tip that I can give, if you don't have sqlite3 on your iPhone (which you wouldn't by default), is to scp the file to your computer, apply the changes, and scp it back to the iPhone.

Here are the RGB Values for the Standard Colors:

Red = (181,0,13)

Orange = (229,98,0)

Green = (47,141,0)

Blue = (15,77,140)

Purple = (103,10,108)

So, with a single sqlite3 UPDATE line, I was able to make my default calendar (the Exchange one) RED — which conveys the "important" notion and is easily visible.
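For illustration only, here is a rough sketch of the kind of update involved. The table and column names (Calendar, color, title) and the calendar name are assumptions on my part, not the exact command from the article above. Check the real schema with sqlite3's ".schema" first, since it differs between firmware versions, and only ever work on a copy of the database:

#!/usr/bin/perl
# Hypothetical sketch -- NOT the exact command from the article above.
# Assumes the colors live as hex strings in a "color" column of a "Calendar"
# table inside Calendar.sqlitedb; verify with ".schema" before touching anything.
use strict;
use warnings;

my $db    = 'Calendar.sqlitedb';    # copied off the phone with scp
my $red   = '#B5000D';              # 181,0,13 from the table above
my $title = 'Exchange';             # assumed calendar title

# Print the schema so the column names can be confirmed first.
system('sqlite3', $db, '.schema Calendar');

# Apply the color change, then scp the database back to the phone.
my $sql = "UPDATE Calendar SET color = '$red' WHERE title = '$title';";
system('sqlite3', $db, $sql) == 0 or die "sqlite3 failed (exit $?)";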

Hope this helps everyone who is trying to accomplish this. Don't forget to close and re-start your Calendar.app. If you don't have a jailbroken iPhone, you can change your non-Exchange calendars by syncing them to the iCal app, changing the color there, and syncing them back.

[updated: Jan 2nd, 2013 | Updated ‘tiny.pl’ to skip “mailto:” references … it polluted emails with replies]
[updated: Nov 28th, 2012 | Updated ‘tiny.pl’ to be much more efficient which resulted in a crazy speedup]

Yesterday I received one too many emails with a long URL that I actually needed to click on. Why is this a problem, you wonder? — I use Mutt. Yes, I've heard it all, and yes, I don't think it's the best email client, but when you spend all day in a terminal, it's simply a pain launching the browser to send a single email. It's "ctrl+a, #, m, type…type, y…sent", vs. "open browser, go to URL, log in, compose, type…type, hit send". Anyway, given that I am using the Apple Terminal.app, which for some reason has not been upgraded in the last 8 years to include hot-linking URLs everywhere (correction: it does, but it does not always handle right-clicking multi-line links which contain "strange characters and symbols"), I have to suffer. I've been toying with the idea of parsing my mutt emails for a while now, and yesterday I finally decided to sit down and write something. My starting point was my .mailcap entry.

My first thought was: why not hook in a custom Perl script to parse the text from the email, extract the URLs, and shorten them? After a little bit of work, I realized that I care about the rest of the text too, and not just the URLs. The final solution can be found here:

http://perl.vpetkov.net/tiny.pl [updated: Jan 2nd, 2013]

In order to use it, you need to put this somewhere (.mutt is a good place), and then modify your .mailcap.
The script starts by NOT reinventing the wheel and using a text-mode browser to parse out the HTML. Please note that the HTML parsing is done only for HTML emails; the same script is overloaded to handle non-HTML emails by supplying a second argument. As you will see, I actually use elinks to parse the HTML instead of lynx, and the reason for that is that lynx introduces a new character on long URLs for some reason when used with --dump. This created problems with shortening the URLs. Then the script splits the resulting output into lines, and then it splits each line into "words". Each "word" is checked for being a URL. If it is, and it's longer than the "trigger" number of characters, it's shortened and printed, along with the original (nice to keep track). Otherwise, the word is just printed, and the process repeats until the end.
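To make the flow concrete, here is a stripped-down sketch of that loop. It is not the actual tiny.pl: the shorten_url() routine is a placeholder for whatever shortener you use (the real script talks to bit.ly), and it assumes elinks is installed:

#!/usr/bin/perl
# Stripped-down sketch of the idea behind tiny.pl -- not the real script.
# shorten_url() is a stub; the real script calls out to bit.ly instead.
use strict;
use warnings;

my $trigger = 30;                          # only touch URLs longer than this
my $file    = shift or die "usage: $0 email.html\n";

# Let elinks render the HTML to text instead of reinventing the wheel.
my $text = `elinks -dump "$file"`;

sub shorten_url {
    my ($url) = @_;
    return "[short] $url";                 # stub -- plug your shortener in here
}

for my $line (split /\n/, $text) {
    for my $word (split /\s+/, $line) {
        if ($word =~ m{^https?://\S+$}i && length($word) > $trigger) {
            print shorten_url($word), " ($word) ";   # keep the original around
        } else {
            print "$word ";
        }
    }
    print "\n";
}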
While this is EXTREMELY simple concept-wise, it is very useful. Is there a downside? — Yes — some potentially private URLs are now public. Solution? — Yes — sign up for bit.ly Pro (free) and use your own domain name. Lastly, I just want to tack on that while searching for an existing solution to this, I did find a program called "urlview". I haven't tried it, but it seems like a much better solution. Here's some more information on it: http://linuxcommand.org/man_pages/urlview1.html
UPDATE: As it turns out, Terminal.app actually picks up some/most/(all?) long URLs. I think it was 'lynx' that was wrapping the line at 65-72 characters, which ended up being the cause of the '+' in front of long URLs and the break onto two lines. Basically, this means that if you don't use 'lynx' for the HTML parsing, you can potentially click on the links. Either way, I still prefer having tiny URLs. Also, I did find a bug in the 't' (non-HTML emails) version of the script, where in some cases it will rip out the URL but not show it at all (original or shortened). I noticed this 2 times (out of 1000+ emails), but I just haven't had time to look into it. I have a feeling that it's not really my script. The email comes in not containing any HTML other than an <a href> tag. I think that messes with the detection.

For anyone who has not been following what is going on with WikiLeaks, here is a good place to start:

http://www.guardian.co.uk/media/2010/dec/03/wikileaks-us-censorship-row

https://www.eff.org/deeplinks/2010/12/amazon-and-wikileaks-first-amendment-only-strong

WikiLeaks is a "whistle-blowing" website. A quick search about it brings you to:

Wikileaks was a website that published anonymous submissions and leaks of sensitive governmental, corporate, organizational, or religious documents, while attempting to preserve the anonymity and untraceability of its contributors.

This week WikiLeaks released some sensitive US documents:

The classified diplomatic cables released by online whistleblower WikiLeaks and reported on by news organizations in the United States and Europe provided often unflattering assessments of foreign leaders, including those of Germany and Italy.

The cables also contained revelations about long-simmering nuclear trouble spots, detailing U.S., Israeli and Arab fears of Iran’s growing nuclear program; U.S. concerns about Pakistan’s atomic arsenal; and U.S. discussions about a united Korean peninsula as a long-term solution to North Korean aggression.

There are also U.S. memos encouraging U.S. diplomats at the United Nations to collect detailed data about the UN secretary-general, his team and foreign diplomats ― going beyond what is considered the normal run of information-gathering expected in diplomatic circles.

None of the revelations is particularly explosive, but their publication could prove problematic for the officials concerned.

The short version of what happened is that WikiLeaks was the target of many DDoS attacks. Eventually, the website was shut down. They decided to change their hosting provider and use Amazon's AWS (public cloud service). After a few days, Amazon shut down their website, claiming that it violated their terms of service. They brought the site up in another location, and then their DNS provider decided to shut them down.

The reality is that WikiLeaks is exercising its right to freedom of speech. The problem is that it holds some very sensitive information, and this makes high-profile political figures nervous. However, when you move past the details of what happened, you come to the real concern — Public Cloud Censorship.

This is the perfect example of why companies are afraid of using public clouds (outsourcing your infrastructure to someone else). As you can see from this example, your entire business can be shut down in a matter of minutes, just because someone has a different opinion than yours. This raises massive concern, and rightfully so. I really think that the long-term solution is private clouds. Take this great technology and deploy it within your own datacenter. When you look at this from the top, it looks a lot like web hosting — you can either outsource your web hosting to a company like DreamHost or BlueHost, or you can do it yourself. There are benefits to both, but in the end, it comes down to your concern for privacy and freedom.

Along with many other people, I personally think that Amazon had the chance to do something great, and as the Guardian and EFF pointed out: “Instead, Amazon ran away with its tail between its legs.”

If you just want the good stuff (configs, how-to’s about this), check out:

http://blog.vpetkov.net/documentation/network-services/smtps-and-imaps/how-to-tunnel-smtp-postfix-server-to-google-gmailgoogle-apps/

If you want to read my full story behind why I even went this route, please continue below:

Recently I started looking at getting rid of as much physical infrastructure as possible. My reasoning: besides being a pain to maintain, everything surrounding having your own infrastructure is a downside. Let's face it — you can't afford what is really needed to have 99.99%-100% uptime. There are tricks that you can use to join multiple sites, but again, when you really get into it, it costs money and it takes time. Other than having an ESX server as a personal "lab", I've realized that I spend just as much time dealing with physical infrastructure as I do creating services, hosting stuff, automating things, and programming. This is just wrong! Also, hosting your own infrastructure means dealing with power, bandwidth, static IPs, etc… Anyway, with that in mind, I started looking at getting rid of my biggest service which had the fewest users — Email.

I hosted a Zimbra server (which I absolutely love) for almost a year, and before that I hosted for 7+ years (and still do at different locations) mail servers running Postfix+Dovecot+SpamAssassin with some webmail client (SquirrelMail or RoundCube). The problem with hosting your own email server (I'll use Postfix synonymously with email server) is that everything is a hassle and a half. At the end of the day, if you have one Postfix server, this is fine. If you have 50+ Postfix servers, not so much. And yes, you can ease it by using Puppet and common config management like svn+rsync, but it's still a hassle. The other problem is that common needs like push email, Exchange, BlackBerry BES, calendars, notes, and others simply do not exist as an "all-in-one" solution that attaches to Postfix. I realized that while it is extremely efficient, and while procmail is simply priceless, it is not economical at the end of the day. Users want ease of use, convenience, pretty UIs, and no spam without any effort on their behalf.

This led me into looking at Google Apps (I'll use Gmail synonymously). It seemed like the perfect solution — off-site, fully managed, relatively cheap (or free), a common UI which almost everyone is familiar with, and virtually no spam. It provides smtp(s), imap(s), pop(s), and other common services. The few problems that should be brought up front are: privacy, security, space, and limitations. With "Gmail" (the free Google Apps tier), you are limited to 7-7.5GB per user and 25 users, and to only "some" advanced SMTP features. You can always pay $50/year/account in order to get 25GB, unlimited users, and some more programmable/API features. The thing that really attracted me was the ability to get an "all-in-one" solution that was extremely easy to deploy for multiple users. The reality is that most users just want their own email at their own domain, with some storage, some web UI, and no spam or viruses. This was something that I was doing with my "Postfix setup", and had scripted quite well in fact, but with Google Apps, it was a matter of 15 minutes per account.

Now, the two main problems were: how do my users who use mutt (myself being one of them) get to their email, and how do existing services AND "dumb services" (storage devices, vCenter, etc…) communicate with the "Gmail" servers. The first — mutt — turned out to be much easier than I thought. If you are already using mutt with any authenticated IMAP/SMTP server, you have probably already stumbled onto msmtp. With a little more work, you can get this piece of software to work perfectly with Gmail. If you need some help, check out: http://blog.vpetkov.net/documentation/network-services/smtps-and-imaps/mutt-with-google-gmailgoogle-apps-or-any-imap-server/ The second problem turned out to be relatively easy, after doing some research and a bit of trial and error. The main idea is that you create a simple "relay" server, in a way: a lightweight Postfix installation which only authenticates and forwards/relays all the emails to Gmail/Google Apps/any SMTP provider for that matter. I went the extra step and configured it to be able to use different SMTP servers with different auth based on different user/email accounts. You can get all the technical details at the top of this post. Good luck, and I hope this saves you some time.
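As a rough sketch of that relay idea (assuming you authenticate to Gmail on port 587 with TLS and keep the credentials in postmap'd hash files; the complete, tested configs are in the documentation link at the top of this post), the main.cf side looks roughly like this:

# /etc/postfix/main.cf -- sketch of the Gmail relay idea, not a complete config
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

# Per-sender relays and credentials (different accounts use different SMTP auth)
smtp_sender_dependent_authentication = yes
sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay

The sasl_passwd and sender_relay lookup tables hold the per-account credentials and relays, and both need to be run through postmap; again, the linked documentation has the exact, working versions.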

There have been many interesting things happening in technology lately, but I've been really busy, and I just haven't had time to post interesting articles. That said, there was an article about AT&T and the iPhone that really caught my attention. The article started with:

“As the carrier with the highest number of dropped calls, lowest customer satisfaction rating, and smallest 3G coverage area, AT&Ts lifeblood over the last few years has been its iPhone exclusivity.”

This is the first thing that caught my attention. Everyone praises how reliable AT&T is. They say that dropped calls are really minimal and that the 3G coverage is very large. Finally, they say that customers are perfectly satisfied. In my opinion, first of all, I've never had as many dropped calls on all the other carriers combined as I've had with AT&T. Second of all, the customer service is terrible. Now, that said, I had the business customer service, where only 30% of the people are incompetent. The last thing is about the 3G — I personally do believe that they have "relatively large" 3G coverage, but it is extremely poor in quality, very unreliable, and 5 bars could mean a 2MB/s download or a 200KB/s download.

The next part in the article said:

“AT&T CEO Randall Stephenson spoke about the issue at an investor conference in New York, saying it’s unlikely the customer base will drop AT&T just because the iPhone goes to another carrier. He said that 2/3 of all iPhone owners were previous AT&T customers. So somehow this Stephenson guy thinks 1/3 is a small number, and if 1/3 of all iPhone owners dropped AT&T it wouldn’t be a problem. Umm, most people would disagree with that.”

Are you crazy? First of all, you think losing 1/3 of your customers is OK? This should tell you once again how much AT&T cares about their customers. Second of all — I think AT&T will lose a lot more than 1/3 of their customers. What Randall is assuming is that the other 2/3 will stay because they are "happy". The main problem here is that nothing better existed at the time. This has drastically changed. The reality is that 2/3 or more of the people would've already left if it wasn't for the iPhone.

“Now, of course, no one is expecting that the moment a Verizon iPhone arrives, there will be a mass exodus of AT&T customers.”

From Verizon alone? – no. From Verizon, T-Mobile, and others — Yes. The point is, when there are alternatives, especially cheaper ones (T-Mobile), people will gladly make the switch.

And at last, my favorite part:

“By all metrics it is the worst of the four major carriers in the US. And Stephenson just doesn’t get it. Of the millions of people who now have an iPhone in the US, 33% of them were not AT&T customers before. That’s a big number.”

What's interesting about that is that it's 33% of millions! For every one million iPhone owners, AT&T just said it's OK to lose 330,000 customers. The second part, and my personal favorite because I've been saying this for a long time — AT&T is the worst carrier by all metrics!

All this said, something you should know about me: I've used all 4 major carriers in the US, at least twice each. I've also owned 3 iPhones (1 on T-Mobile), 3 BlackBerries, 3 Treos, >5 other smartphones, and a few other regular phones. I personally HATE AT&T. And yes, I own an iPad too.

If you want to read the article, you can find it at:

http://www.tgdaily.com/mobility-brief/51659-att-not-worried-about-loss-of-iphone-exclusivity

I have a Twitter dilemma, and I am very curious what people think. Here's the problem:

If you make your tweets private (which is what I have done right now), you are not forcefully followed by spammers, BUT when you add friends, if they don’t add you back, they will not see your replies.

If you make your tweets public, you are forced to deal with the 13-year-olds who are trying to get 50,000 followers and 2 million tweets.

I personally think that this is a bug with Twitter. If you have protected tweets, and you think that someone is 'safe enough' to follow, Twitter should automatically allow that individual to see your tweets, even though they are protected. This only makes sense. Heck, at least enable an option to toggle this.

What does everyone else think?

Hey there, welcome! I finally brought up a new website. It is far from complete, but little by little, it will get there. Recreating all of the documentation will take a long time, so please be patient and check back often. My website had needed a redesign for a very long time, and I kept putting it off since there was never enough time. I thought long and hard about how this website should look and feel in order to be simple, minimalistic, and clean, while offering very rich and detailed information — mostly in the form of documentation and "what's new or on my mind" articles.

What started this was my realization that it was time to migrate everything away from PmWiki. While PmWiki was a great replacement for my original static site, I slowly outgrew it. I started using it (no, I actually used about 7 other wikis first until I stumbled onto PmWiki) because I wanted a quick way to add documentation while I wasn't near a terminal. After wikis got popular and the spammers started hitting them, I quickly password-protected it. Then, little by little, I kept adding more plug-ins/mods, themes, and custom code. Little by little, I realized that other than the dynamic text entry, I had re-written or customized almost everything. It got to a point where I spent more time maintaining the wiki around upgrades than the actual documents and articles.

Due to this, I started a blog — using WordPress. My initial impression was that WordPress was very heavy, bogged down, and very ugly. I did not like my initial experience. I switched to another blog suite — textblog. After a few months I realized that I needed more functionality, so I deployed a simple PHP blog. After a few more weeks I decided to give WordPress another chance, since I had just read an article saying that they were going to release a new "ajax" management interface. This is what hooked me onto WordPress. However, as time went on, I realized that maintaining PmWiki *and* WordPress was almost a full-time job. I spent endless nights trying to customize the code on each one in order to make them fit a common theme. I finally gave up and decided to just shut down my website. After a few months, I came to the conclusion that the documentation and articles I had were not only useful to others, but to myself too, and I actually missed having them up to date. This brought on a new goal: use a documentation source and a dynamic article system under one common system. I looked at WordPress' 'Pages', and liked them for the most part. While not amazing, they suffice. It was decided: I was going to use WordPress to replace my Wiki and Blog.

Before diving in head-on, I looked at some content management systems (CMS) like Joomla and Drupal. I had actually used Drupal at a previous job, and I hated it, and Joomla simply reminded me too much of Drupal. I looked at a few other ones, but the story was the same. The reality is that the documentation pages are static for the most part. They get written once and stay mostly the same, with small changes here and there. CMSes, on the other hand, are more like portal drop-ins. This is also why they require a lot more work. I had already done my share of maintaining things, and I just wanted something that "worked".

Here we are now, with WordPress as the documentation system (via Pages) and the dynamic article system (via the blog engine). I did have to spend a good 4-5 hours getting everything configured and customized, but with the exception of a small piece of code, all of my customizations will not be impacted at all by upgrades. This is it. I will keep this theme, look, and feel for a very long time. My main goal is to provide lots of documentation in a few categories: Network Services, Smart Phones, Security, Programming, and lastly, Operating Systems. Each of those has many sub-categories, but you can find more from the Pages. I will also provide any and all files/scripts/programs that I either come across or create.

Lastly, everything is free for the taking. You may take, modify, and/or share anything from this website — of course, at your own risk. I would prefer if you give credit and put a link to my site, but you are not required to. Thanks, and I hope you find all of the information here useful.