Archive for October, 2009

100,000 Followers! Thank you!

Almost three years ago, I first heard of Twitter. I didn’t get it and quickly dismissed the friends who pushed it on me. A waste of time. Idiots! All of them.

Fast forward two years and I finally signed up. Too many people sent the “I want to follow you, join Twitter” email so I just went with it, plus the previous years of subliminal brainwashing got me thinking this might be more than just a flash in the pan. A week later I was addicted and spent all my time learning the ins and outs; the bots, spam, DMs, follow limits, action limits, follow ratios, automation tools, the API, etc.

Twitter is much more than a network, it’s a protocol. A new way of communicating. A guilt-free way of communicating – not as synchronous as chat, but not as asynchronous as email – you can reply when you want to and you don’t have an inbox piling up. For me, that is perfect.

Follow me on Twitter!

image flickr/Pierre Beteille

64-Bit Firefox Builds Are Here

Update: Patch 513747 has been applied.

The real hero of this story is Josh Aas. He got the 64-bit goodness into the codebase a couple of weeks ago and it was committed about a week later. I haven’t done any benchmarks on this, so please check out Josh’s site – Boom Swagger Boom – and more specifically the benchmarks he’s put together. This isn’t a huge jump in performance, but I wasn’t really expecting one anyway. Since this version is using Gecko 1.9.3, there should already be a performance gain, and the 64-bit goodness just adds to that.

The other interesting thing I’ve stumbled into is how to incorporate GCD into Firefox. I’m ready to start on that project, but have been bogged down with real world things like work. I’ll try to slip it in over this weekend or next, but no promises. There is an easy route and a hard route for GCD, I’m going to stick to the easy for now. As for OpenCL, that will be hard, very hard. I’ll look into that once my GCD builds are complete.

To download 64-bit Minefield, go to the Downloads page.

Setting Up Django With Mod_WSGI On Snow Leopard


Updated below (08/26/2014).

As you’ve probably noticed, I like to compile things. Package management, although useful, just doesn’t do it for me. Recently, I had to set up a customized install of PHP on CentOS and was forced to use RPM. Well, the experience was funky, to say the least. The basic PHP install had all compile-ins disabled. I had to root around and find the specific package, such as GD, then RPM it into PHP via an ini directive in /etc/php.d/. This sort of drove me nuts.

Anyway, onto Django. Snow Leopard comes with Python 2.6.1 and Apache 2.2.11. Of course I compiled my own version of Apache, resulting in a 2.2.13 version in /usr/local. I left Python alone. Installing Django was a snap and once again I had to go the bleeding edge route and use the Django trunk as my install, symlinking it into /Library/Python/2.6/site-packages/. This worked just great. Now I needed Apache to serve up my Django framework. I had the choice of mod_python or mod_wsgi and seeing the benefits of mod_wsgi, I went with that (also I’ve had horrible experiences with mod_python in the past).
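For anyone repeating this, the trunk install amounts to a checkout plus a symlink. A sketch, where the checkout location is my own assumption (the site-packages path is the one Snow Leopard actually uses):

```shell
# grab trunk Django (the 2009-era SVN URL) into a scratch directory
svn co http://code.djangoproject.com/svn/django/trunk/ ~/src/django-trunk
# symlink the package into the system Python's site-packages
sudo ln -s ~/src/django-trunk/django /Library/Python/2.6/site-packages/django
```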

I downloaded mod_wsgi, and did the following:

# tar -xzvf mod_wsgi-2.6.tar.gz
# cd mod_wsgi-2.6
# ./configure --with-apxs=/usr/local/apache2/bin/apxs
# make
# make install

And it blew up with the following error:

Warning! dlname not found in /usr/local/apache2/modules/
Assuming installing a .so rather than a libtool archive.
chmod 755 /usr/local/apache2/modules/
chmod: cannot access `/usr/local/apache2/modules/': No such file or directory
apxs:Error: Command failed with rc=65536
make: *** [install] Error 1

Great, the mod_python nightmares started returning. I did some research and found many people were having problems with mod_wsgi on Snow Leopard. Graham Dumpleton, the maintainer, has been very active in helping people out. His main suggestion was to modify the Makefile to have it point to the correct version of Python by changing the LDFLAGS and LDLIBS. None of these suggestions worked for me, and even Graham seemed baffled as to why it would work on some Snow Leopard systems and not on others. One of his points was that people with a MacPorts build of Python were corrupting the mod_wsgi build, as that build is a bit shaky. Without specifying your version of Python during ./configure, the MacPorts libraries would supersede the default install or other compiled versions. I checked, and indeed I had a MacPorts Python in /opt/local/. How did that get there? I hate package managers.

But this wasn’t the problem as I could see the correct version of Python in the Makefile. Most of the people having problems had a customized Python build, but I was on the other side with a customized Apache build. So I looked into the difference between my APXS and the default APXS. The only major difference was on line 199:

($httpd = $0) =~ s:support/apxs$:httpd:;

My version did not have that httpd in there and looked like:

($httpd = $0) =~ s:support/apxs$::;

Making this change solved the problem. I’m not sure what I did to get a funky APXS, so if anyone knows I’d love to hear it.

I had some other issues with permissions, paths, and eggs, but they were fairly simple and if you follow the documentation, you can get everything working. I don’t always RTFM.
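Once mod_wsgi itself builds, the Apache wiring is short. A minimal sketch, assuming a project living at /usr/local/django/mysite (a hypothetical path) with a django.wsgi script that sets DJANGO_SETTINGS_MODULE and exposes django.core.handlers.wsgi.WSGIHandler() as application:

```apache
LoadModule wsgi_module modules/mod_wsgi.so

WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi

# Apache 2.2 syntax; 2.4 would use "Require all granted" instead
<Directory /usr/local/django/mysite/apache>
    Order allow,deny
    Allow from all
</Directory>
```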

Hopefully this saves someone else the time and misery of setting up Django using mod_wsgi under Snow Leopard.


So here I am years later, having completely abandoned Apache for nginx, and somehow I got stuck getting graphite up and running. After the fun of py2cairo, I just wanted to stick to the script as much as possible, so I went with Apache httpd. I ran into my good friend, the apxs:Error: Command failed with rc=65536 error above.

I’ve seen this before. I googled it and found this page, but the missing-httpd fix doesn’t do anything. Uh oh. It looks like another person or two has had the same problem, so I followed their advice (“rebuild apache without --enable-module or --enable-shared”), and bam, it works.

If I have time to dive into the details of why, I’ll update this post again. Here is the helpful thread (that references this page).

My Twitter Account Hacked

“Last night I received a Growl notification from Tweetie of an RT on a spam message I sent.”

My wife said, “I don’t think I understood a word you said.” Well, that’s what happened, and it didn’t dawn on me at first. With the number of followers I have, I get a constant stream of mentions, mostly spam. It finally sunk in that this person was attributing the message to me. First off, I don’t understand why someone would RT such a message, but I’m glad they did. Second: who, wha, huh? Why am I spamming people?

Try logging into my account. Fail. Try retrieving my password. Fail.

Sent a message to Twitter support around 1 AM, then started looking for ways to hack my account back. Using a separate account with API access, I started running through possible traps, DoS attacks, etc. About this time, I got the auto-response from Twitter asking if I was sure I didn’t just forget my password, so I responded to that and checked my account again. It’s gone. Good. Better dead than spamming.

At about 6 AM, my account was reverted to my email address and the password was changed (I received no notification of this). I got in, secured the account and all is good again.

Why am I blathering about all this? OAuth.

Twitter has preventative measures against brute-force attacks. If you fail logging in a certain number of times, your account is locked for one hour. I’m not sure if this is IP-based (which would be stupid) or just a general lockdown. There is no way somebody could have brute-forced my password.

So how did it happen? Well, I signed up for a Twitter stats service from a company I’ve heard positive things about in the past. They weren’t using OAuth, so I exposed my credentials. In the early days of the Twitter ecosystem, this would have been perfectly acceptable, but now that OAuth has been fully implemented and the exploits/bugs worked out, there is no excuse not to use it. If the developer is too lazy to implement OAuth, the service probably isn’t worth it. The company can be totally reputable (which is why I will not mention them), but if the passwords are stored, someone is gonna get tempted.

My personal feeling is that this ecosystem should be FORCED to use OAuth. I even mentioned something of that nature on the dev list. But the ecosystem is what matters, and if a majority aren’t using OAuth or won’t upgrade, they’ll get what they want. Developers, Developers, Developers!

Personally, I’ll no longer use any service that isn’t using OAuth, and you should probably consider doing the same. Of course, there are other possible ways people could have compromised my account, but the timing just says otherwise.

Oh yeah, follow me on Twitter!

image / Andrei

Why Apple Bought Placebase

Update: And we have an answer – Google Maps Ditches Tele Atlas in Favor of Street View Cars and Crowdsourcing

I have no idea. But the title is catchy, and I’m hoping to learn the answer by a) writing out my thoughts and b) you. The transaction went down in July but was only uncovered recently, causing a day-long brouhaha on the blog circuit, and now it’s been forgotten. As someone with a little knowledge in this area, I’m a tad more than intrigued. I’m just going to dive into a couple of theories. They all focus on why Apple would move away from Google (I’m not even sure that is their intent):

  1. Current Data Limitations
    Google places restrictions on their data use. A clear example of this is Jobs’ “BYOM”, Bring Your Own Map, statement about turn-by-turn directions. Google will not allow their data to be used for such functionality.
  2. Cost Reduction
    Tiles aren’t free, even to important and huge clients. Google went from a “per transaction” payment model to a “per tile” payment model, and the enterprise-level pricing isn’t exactly cheap. Given the amount of Apple’s usage, even at a substantial discount, this cost is still probably in the tens to hundreds of millions per year.
  3. More Control
    Apple is a control freak. No need to argue this. Being reliant on a company Apple increasingly competes with can’t feel good for anyone, especially Apple. The threat of Google barring Apple from their maps, or even significantly altering the usage deal, has DOJ written all over it. This isn’t going to happen, and Apple isn’t worried. Google does have free rein to make their maps look like crap if it affects all customers, which leads into the next point…
  4. Google Adding Data
    Just recently, Google started adding advertisements PLUS user-generated content into iPhone apps. Wait, I thought one of the benefits of the enterprise license was the ability to remove ads. And wait, does this only affect iPhone apps? I think Google will come to their senses and flip off that UGC, unless you want to see it. The ad part is a bit odd and doesn’t jibe with points 2 and 3 above.
  5. More Control, Redux
    Google doesn’t own the tiles, they are licensed. Tele Atlas provides the map information, and other services provide the satellite imagery (Digital Globe, GeoEye, USGS, TerraMetrics, and the list goes on). There are some interesting things to note here.
    Tele Atlas provides data to many vendors including Yahoo and Microsoft and probably Placebase. Tele Atlas provides the map information, not the tiles, so each vendor can make them look however they want through Tele Atlas’ proprietary API. The United States is a 5GB or so text file. Apple’s designers could make their maps look better than everything else on the market, without a doubt.
    The satellite data comes mainly from Digital Globe, but once you start zooming in, watch the copyright info on the lower right of the map – the data comes from many, many sources. Apple can easily strike deals with these same sources. Digital Globe is happy to point out their non-exclusivity with Google. But on the other hand Google seems to be in bed with GeoEye, launching a satellite together and all.
    Other data such as traffic and street view is proprietary to Google.
  6. Data Layers
    Placebase offers many layers of statistical data through their paid API and has won awards for their PolicyMap website. But if you look closely enough, this is just tract data freely available from the US Census Bureau. Gathering and overlaying this data is a trivial task. In one of my previous companies, we built a much, much richer data set covering many more areas than Placebase offers. This was a herculean task, but as a small startup, we pulled it off. So I think access to the data layers is moot here, as it adds little value.

So what’s this all about then? My guess is that it has to do with Augmented Reality/Extended Mapping capabilities. This could be especially useful on the mythical tablet, and could be even more useful for the collection of future Census data, not only in the US but in the rest of the world. It’s a little late for the 2010 Census, though, so this idea is a tad suspicious.

I welcome all opinions, corrections, and comments on this as I really want to get to the bottom of Why Apple Bought Placebase.