Tag Archives: Google


The Internet is About to Become WAY Faster

Earlier this week, the big news in the tech space surrounded the completion of the HTTP/2 Spec.

Gibberish, gibberish, gibberish.

What does this mean for the internet? The short version is: it’s about to become way faster.

Faster is extremely important if you want to make money from traffic to your site. Or, you know, if you use the internet at all. No one wants to wait for a website that just sits there and spins while it’s trying to load. We all hate that. And importantly, Google hates that.

Several years ago, Google invented and implemented the SPDY protocol. Simply, SPDY allows for the compression and efficient transmission of requests between browser and server. I’ll get into the technical details later, but for the sake of summarizing why the Internet is about to get faster… Google’s SPDY technology is the cornerstone of HTTP/2.

Now that the spec is finalized, it goes to editorial for cleanup and publishing… but the nuts and bolts of the spec will not change going forward. That means browsers and servers can start rolling out these changes as soon as… today.

Technical explanation

If you are not interested in the technical explanation for HTTP/2, please skip to the next section. I don’t want you falling asleep and drooling on your keyboard.

In the original days of the internet, there was the HTTP/1.0 spec. This spec defined how clients (browsers) and servers communicated over a network. We still use 1.0 quite a bit today, though 1.1 is the currently preferred version.

When it comes to HTTP, the concept is simple… a client requests a resource (image, HTML, MP3, whatever) from a server; the web server interprets the request, goes back into a storage closet, finds the resource, has the requestor sign for it, and sends the resource on its way back to the client. A simple version of the conversation:

BROWSER: “Hey, server… can I get that image named image.jpg? You should find it in this folder.”

SERVER: “Sure, let me go look. Oh, here it is. There you go”

BROWSER: “Thanks, dude”

If the server can’t find the image in the directory, it sends the client a 404 (Not Found), but that’s just a sidenote.
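That conversation maps almost literally onto the raw text of the protocol. As a rough sketch (the host, path, and header values here are illustrative, not from a real server), here is what an HTTP/1.1 request looks like on the wire, and how a client reads the status line of the reply:

```python
# A sketch of the raw text a browser and server exchange over HTTP/1.1.
# The host, path, and headers are illustrative examples.

def build_get_request(host: str, path: str) -> str:
    """BROWSER: 'Hey, server... can I get that image named image.jpg?'"""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: keep-alive\r\n"
        f"\r\n"  # a blank line ends the headers
    )

def parse_status_line(response: str) -> tuple[str, int, str]:
    """SERVER: the first line of the reply says how the lookup went."""
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

request = build_get_request("example.com", "/images/image.jpg")
print(request.splitlines()[0])  # the request line the server sees

found = "HTTP/1.1 200 OK\r\nContent-Type: image/jpeg\r\n\r\n"
missing = "HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n"
print(parse_status_line(found))    # ('HTTP/1.1', 200, 'OK')
print(parse_status_line(missing))  # ('HTTP/1.1', 404, 'Not Found')
```

The 404 in that last line is exactly the “can’t find the image” case.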

HTTP/1.0’s Problem

The problem with HTTP/1.0, however, was that it only allowed a single request on a single connection to the server. To put it another way: say a web page has 10 images on it. In order to get those 10 images from the server, the browser would have to open 10 separate connections and send 10 requests. That creates a lot of overhead for the server, and if the server wasn’t optimized for that kind of capacity, it could crash… or at least be in the weeds. It was bad all around for bandwidth as well. All of those connections add up to high bandwidth costs!

So God gave us HTTP/1.1.

HTTP/1.1’s Problem

Given this inherent problem with HTTP/1.0, HTTP/1.1 enhanced the original spec by adding the concept of pipelining. Imagine, if you will, a highway tunnel. Generally, there’s no passing in a tunnel. While HTTP/1.0 only allowed a single car through the tunnel at a time, HTTP/1.1 allowed multiple cars in the tunnel, but dictated no passing. And because the web server on the other end of the tunnel can only process requests as they come in, you end up with a stacked up queue of requests waiting to get processed and… you still have the same problem where the server gets in the weeds and slows down.

In both the case of 1.0 and 1.1, server technology evolved to allow concurrency of requests. This gave us multiple lanes in the tunnel, but 1.1 still dictated no passing. So requests in one lane had to stay in that lane, but there could be more than one lane, which let us band-aid over the inherent weakness in the HTTP/1.x stack.
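The tunnel analogy can be put in rough numbers. This toy simulation (the service times are invented) compares in-order pipelined delivery, where one slow response stalls everything queued behind it, against HTTP/2-style multiplexing, where each response finishes on its own schedule:

```python
# Toy model of head-of-line blocking. Service times are made up:
# response 0 is slow, the other nine are fast.
service_times = [5.0] + [0.5] * 9  # seconds per response

# HTTP/1.1 pipelining: responses come back strictly in request order,
# so response i cannot finish until everything before it has finished.
pipelined_finish = []
elapsed = 0.0
for t in service_times:
    elapsed += t
    pipelined_finish.append(elapsed)

# HTTP/2-style multiplexing: streams are independent, so each response
# finishes on its own schedule (network effects ignored in this sketch).
multiplexed_finish = list(service_times)

print(f"pipelined, last response done at: {pipelined_finish[-1]:.1f}s")   # 9.5s
print(f"multiplexed, everything done by:  {max(multiplexed_finish):.1f}s")  # 5.0s
```

One slow car in the no-passing tunnel nearly doubles the total wait; with passing allowed, the slowest response alone sets the finish time.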

Enter HTTP/2

When Google started giving SEO benefits to sites based on speed, they also ate their own dog food and invented SPDY. SPDY allows for much more efficient compression and transmission of resources when both server and client support it. It also allows many requests to be multiplexed over a single connection. That page that had 10 images and had to open 10 connections for those 10 images can now fetch all 10 over a single connection at once. Efficiency, I tell you.
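One concrete piece of that efficiency is header compression. Request headers are highly repetitive across the requests for a single page, so even general-purpose compression shrinks them dramatically. A rough sketch (zlib stands in here for SPDY’s header compression; the header values are invented, and HTTP/2 actually uses a purpose-built scheme called HPACK):

```python
import zlib

# Ten near-identical requests' worth of headers -- the repetitive
# cookies and user-agent strings a browser resends on every request.
headers = (
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10)\r\n"
    "Accept: image/webp,image/*,*/*;q=0.8\r\n"
    "Cookie: session=abc123; prefs=dark-mode\r\n\r\n"
)
ten_requests = (headers * 10).encode()

compressed = zlib.compress(ten_requests)
ratio = len(compressed) / len(ten_requests)
print(f"{len(ten_requests)} bytes of headers -> {len(compressed)} bytes "
      f"({ratio:.0%} of original)")
```

Because the same strings repeat ten times, the compressed form is a small fraction of the original — bytes that never have to cross the wire.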

As with any working group, the task force that put together HTTP/2 had representation from Google. Google, as a good citizen, shared its knowledge and the spec for SPDY with the working group, and it became the basis of HTTP/2. In fact, Google will now retire SPDY in favor of HTTP/2.

In fact, clients are supporting HTTP/2 now. Well, a lot of them are, anyway… and that’s because of Google’s implementation of SPDY. Internet Explorer 11+, Firefox 36+, and Chrome all implement SPDY and HTTP/2 support, though not all of them enable it by default yet. Safari and Mobile Safari will likely get support soon.

Most web servers also implement SPDY and HTTP/2, with the exception of lighttpd.

What does it mean to me?

The new HTTP spec is probably not anything you need to worry about at this point. System administrators will want to make sure their web servers are up to date and their TLS certificates are upgraded. Though the working group does not require HTTP/2 to use TLS, I’d expect most server vendors to require it in their own implementations… for security reasons.

On the client side, HTTP/1.1 still works. The working group was very careful to ensure backwards compatibility with prior versions of the spec. So if your browser makes a 1.1 request to a 2.0 server, the server will still answer in 1.1 with the same limitations I described above.

As developers, we will most likely want to use 2.0 when we can. The finalization of the spec is so new that it remains unclear what that means in practice. For instance, what does this mean for the WordPress WP_Http class? Probably nothing in the short term, but I’d expect enhancements to start rolling in as optional “toys” for developers.

Are you a developer or engineer? What are your thoughts on the new spec?

Aaron Brazell, Rules for Entrepreneurs

Rules for Entrepreneurs: Compete and Collaborate

Photo by Roger Barker on Flickr.

Google and Apple are not only competitors… they are collaborators. Indeed, Apple and Google both offer top-tier smartphones – the iPhone from Apple and the assortment of Android devices powered by Google (Google not only has its own phones but is the primary steward of the Android open source project).

In the same world, Samsung and Apple are rivals (and becoming ever fiercer ones) with competing smartphones (Samsung runs Android) sparking ferocious lawsuits back and forth, but Samsung is also a major supplier of parts to Apple.

This segment of my continuing series on Rules of Entrepreneurship is all about knowing when and how to compete and when collaboration is a better option. They are not mutually exclusive. This is a natural segue from my last post where I suggest that entrepreneurs focus on doing one thing well.

Principle: Don’t Reinvent the Wheel

It frustrates me to watch startups (usually not very good ones) try to reinvent the wheel. A classic example of this was from back in 2007, when I was sitting in a Starbucks in Columbia, MD. We had a group of entrepreneurs who gathered there on a daily basis to cowork together.

One of the guys I was working with introduced me to a pair of African-American entrepreneurs and he wanted me to hear about what they were building. I sat down and listened to their pitch. They were building the “YouTube for the African-American community”.

Full stop.

What? Why? Why not use YouTube?

They were well into the process of building an entire video platform from the ground up, complete with their own video encoding technology, instead of leveraging what YouTube (and subsequently Google) already created.

The entrepreneurs’ real mission was creating a video-sharing community for African-Americans, not creating video technology for African-Americans to use. I told them that day that they should abandon attempts to build their own video service and instead leverage YouTube (which is built and maintained by really smart people at Google) to build the community they really wanted to build.

Why re-invent the wheel? You distract yourself from your core goals.

Sidenote: I have never heard of or from those entrepreneurs since.


As an entrepreneur, part of the process is identifying your competition. We certainly have done that at WP Engine. Sometimes, it is to your benefit to team up with your competition to achieve a common goal. Remember, business is business and it’s not personal. Don’t let your desire to “win” get in the way of your ability to get ahead.

Also, remember the age-old saying, “A rising tide lifts all ships”. What is good for your competition is often good for the entire industry you’re in. Everyone wins.

Certainly that’s not always the case, but it’s the case more often than you might think.


In my opinion, competition is a bottom-line issue and there are lots of ways to positively affect your bottom line. Usually, competition does not equate to a zero-sum game, an assumption that rookie entrepreneurs tend to make. (I did this a lot in 2006 and 2007 while at b5media, taking pot shots at competing blog networks – years later, I find it all kind of silly.)

When you do choose to take on direct competition, keep it narrow, precise and for a specific purpose. Don’t allow personal feelings to affect your business strategies and, in the process, keep the door open to cooperation with your competition in other areas.

Next week, I’ll continue this series and talk a bit about release cycles – which is always a fun debate. If you’re not already subscribed to this blog, do so now. Also, follow me on Twitter where I’ll be talking about entrepreneurship, WordPress and a healthy dose of sports on the weekend.

Aaron Brazell

Make the Web, Cloud Do Your Work So You Don’t Have To

Photo by Balleyne

While perusing around the web yesterday (after sifting through my email post-vacation), I came across this Ars Technica article discussing the new Firefox upgrade timeline. It actually follows a similar upgrade timeline that WordPress adopted after WordPress 2.0 was released.

The new policy outlines a 3-4 month window for new major releases with limited security updates for releases outside of the current stable release.

The Ars article goes on to describe the angst that has come out of the corporate community, which has been lulled into a process of having to test every new software release to ensure compatibility with internal, firewalled webapps that have, in no small part, been created for a specific browser – usually Internet Explorer 6 or 7.

Browser Stagnation Caused IT Stagnation

A few years ago, the stagnation in browser support was broken as Firefox and Opera started a race to implement CSS3 features that were not the status quo (a status quo largely set by Internet Explorer) and were not even blessed as part of an official spec. The browser makers just started doing it.

Notably, some of these browser-specific “add-ons” to CSS dealt with things that had been desired but only usable with browser hacks: rounded corners, opacity, etc.

Apple came on the scene, particularly with iOS (then iPhone OS), and put a tremendous amount of development effort into WebKit. WebKit is a browser engine like Gecko, the engine that Firefox (and the old Netscape) is built on. Apple’s take on WebKit was Safari. Google followed suit with Chrome a while later, also built on WebKit.

What we end up with is a browser war with higher stakes than the famed Internet Explorer-Netscape war of the 1990s. We also see a lot more innovation and one-upmanship… something that can only be good for consumers.

The Ars article describes a tenuous balance for enterprise customers. That balance is the need to support internal firewalled applications while giving users access to the public web. The money quote from the article sums up the balance nicely:

The Web is a shared medium. It’s used for both private and public sites, and the ability to access these sites is dependent on Web browsers understanding a common set of protocols and file formats (many corporate intranet sites may not in fact be accessible from the Internet itself, but the browsers used to access these sites generally have to live in both worlds).


If developers could be sure that only Internet Explorer 9, Firefox 5, and Chrome 13 were in use on the Internet, they would be able to make substantial savings in development and testing, and would have a wealth of additional features available to use.

But they can’t assume that, and so they have to avoid desirable features or waste time working around their absence. And a major reason—not the only reason, but a substantial one—is corporate users. Corporate users who can’t update their browsers because of some persnickety internal application they have to use, but who then go and use that same browser on the public Internet. By unleashing these obsolete browsers on the world at large, these corporate users make the Web worse for everyone. Web developers have to target the lowest common denominator, and the corporations are making that lowest common denominator that much lower.

As someone who has worked on the web for more than 10 years and who has also worked in Enterprise, I agree.

I remember when I worked for the Navy and the Navy-Marine Corps Intranet (NMCI) was in deployment. It was a massive headache for everyone involved because the assumption with that contract was that systems could uniformly be tied together and standardized. By my understanding, they finally achieved that last year, but not until after being years late and hundreds of millions over budget.

I don’t know the final deployment, as my contract with the Navy ended back in 2004. I do know that proprietary systems were in place that were designed to a function and not to a standard. When standards were introduced as necessary requisites for any system in that ecosystem, the implications were huge.

This is the world we live in today where, as the Ars article points out, browsers that must live in a world of compatibility and still access the public web drag the rest of us down.

Outsource Your Shit and Focus on Your Core Business

But Ars already makes that point. I’m not making it again except to highlight the validity of their thoughts. My point is more intrinsic to startups, small businesses and entrepreneurs and I make it delicately as it has, in some ways, countered some of my thoughts in the past.

Why should you worry about building applications to a function when you can build them to a standard? Or better yet, why build from the ground up to a function when you can use external, cloud-based services built to a standard?

Take Microsoft’s just-announced Microsoft Office 365. Now, I don’t know anything about this product so don’t take my commentary as an endorsement in any way. We use Google Apps at WP Engine (another good example of exactly what I’m saying here).

In Office 365, you have a common piece of line-of-business software (Microsoft Office) available by subscription and hosted in the cloud. This eliminates IT administrators’ requirement to test on the internal network. It’s on the web! Everyone has the web! And it doesn’t need (and in fact cannot work with) non-standard browsers. You don’t even need Microsoft’s browser to use it.

Suddenly, IT administrators, along with Microsoft, have saved the enterprise tens, if not hundreds, of thousands of dollars in man-hours testing and re-testing for OS compatibility. And suddenly, IT administrators, along with Microsoft, have taken the chains off users, giving them freedom of choice in their browsers (which, by the way, is more than a pie-in-the-sky idealistic thing… it’s also a cost-saving efficiency thing). And also suddenly, Microsoft has freed the web to thrive rather than be held back by corporate requirements.

This kind of thing makes perfect sense. Why re-invent the wheel? Why put resources into something you don’t have to? Why not let a third party, like Microsoft or Google, worry about the compatibility issues in line-of-business software.

After all, your company isn’t in the core business of building these applications. You are in the line of business of doing something else… building a product, a social network, a mobile app, a hosting company, etc. Your software should not define the cost of doing business. Your people and your product should.

Aaron Brazell, Featured, Hall of Fame

I’m Pro Choice. I’m Android.

We in the tech world are a fickle bunch. On one side of our brain, we scream about openness and freedoms. We verbally disparage anyone who would dare mess with our precious Internet freedoms. Many of us, especially in my WordPress community, swear allegiance to licensing that ensures data and code exchanges on open standards.

Yet one thing stands out to me as an anomaly on this, the opening day of pre-orders for the iPhone 4.

Photo by laihiu on Flickr

Ah yes. The iPhone. The gadget that makes grown men quake in their shoes. The thing that causes adults to behave as if they left their brains at the door. At one point in time, I called this behavior “an applegasm” and identified the Apple store as the place where intelligent people go to die.

And it’s not only the iPhone. It’s the iPad too (I bought one 3 weeks after release and only because I needed it for some client work). In fact, it’s any Apple device. Apple has a way of turning people into automatons controlled by the Borg in Cupertino.

Don’t get me wrong. I love Apple and I love Apple products. However, there is a degree of hypocrisy (or shall we call it “situational morality”) that comes into play here. There is nothing “open” about Apple products. Sure, Steve Jobs famously points out that Apple encourages the use of open web standards like HTML5, CSS3 and JavaScript, but the devices are nowhere near open.

In fact, the devices are so closed and guarded that strange things like lost or stolen iPhone prototypes make huge news. There is only one device. There is only one operating system. There is only one permitted way of designing apps. There is only one carrier (in the United States).

And the open-standards, free-web, maniacal tech world that is ready to take off the heads of closed entities like Microsoft, Facebook and Palm whistles silently and looks the other way when it comes to Apple.

In another few weeks, I am going to be eligible for an upgrade with Verizon Wireless. As a longtime BlackBerry user (I refuse to give money to AT&T ever), I will be investing in a new Android-based phone. I won’t be doing this with any kind of religious conviction about open source. There is a legitimate place for closed source in this world. I’m doing this because the culture of openness (which supersedes the execution of openness, in my mind), allows for more innovation and creativity.

In the Android world (which is quickly catching up to the iPhone world), apps are being created without the artificial restrictions placed by a single gatekeeper. There are more choices in phones. Don’t like this one? Try that one. There is a greater anticipation around what can be done.

Apple had to have its arm twisted to enable multitasking in its latest operating system. It had to have its arm twisted to allow cut and paste. It still hasn’t provided a decent camera, despite consumers begging for one. In the Android world, if Motorola doesn’t provide it, maybe HTC does. You have choice. Choice is good.

I’m pro choice.

Aaron Brazell

Doers & Talkers: Cultivating Innovation

A few years ago, I wrote a post called Doers and Talkers where I profiled two types of people in the technology space: Those who have ideas and are visionaries (or talkers) and those who implement those ideas on behalf of others (the doers).

I looked back at that post and realized that, while correct, it was a bit simplistic. In fact, in a world filled with shades of grey, there are more than just doers and talkers.

In review, talkers tend to be the ideas people. They have great ideas, whether in technology, business or just life in general. They see big pictures and tend to have lofty goals. They think quickly and often take steps to see their visions implemented, oftentimes without thinking about ramifications and potential pitfalls.

Talkers benefit from irrational thinking. They look at the impossible and, in their own minds, they don’t think it’s impossible. They see limitations as challenges and tend to think that road blocks are only minor inconveniences.

via XKCD


These are the CEOs and founders of the world. These are the people like Steve Jobs of Apple who say, “Phones shouldn’t be this limiting. I should be able to use my natural senses and behaviors to make the phone do what I expect it to do.” Thus, the iPhone was invented with a touch screen interface and technologies like the accelerometer that allow manipulation of the device through natural movement.

Doers, on the other hand, tend not to indulge in creative thinking. In fact, they tend not to be creative people. They are analytical, engineering types who look at data and extrapolate results from that data. Doers, in the software world, are the engineers who are handed a list of specs, a timeline and a budget, and are told to go and execute.

These people thrive on structure and expectations. They like to know what’s expected and, when they know, are exceptional at delivering results. Doers abhor irrational behavior and approach problems from a perspective of frameworks and architecture. They don’t venture outside their tent posts and, by doing so, are the necessary ingredient for Talkers to see their visions executed.

There really are shades in the middle, however, and they are a rare breed. It’s the people in the middle – who have the business savvy to see big pictures and allow for some degree of dreaming, yet also have a firm understanding of expectations and roadmaps – who are so valuable.

See, doers rarely engage with the talkers in providing context or realistic expectation for proposals. Doers don’t really want that role. Doers get into trouble because they don’t know how to speak the language of the talkers. They don’t have the confidence, perhaps, or the desire to take a project and drive a sense of reality into a proposal. That’s above their pay grade, in their minds.

Meanwhile, talkers have an inherent nature, generally, that precludes outside input in decisions. Therefore, they don’t ask, or perhaps even think to ask, the doers for input. They create the business plans and monetization strategies, but rarely think about the implementation. By doing so, they often overlook problems that might be incurred. Talkers are usually distant from the details of the project and so, they tend to miss the detailed tactical decision making process that is employed by the doer.

Finding that personality who has the business understanding to see the 50,000-foot view, interface with management to guide decisions in a productive manner, and who also has the background and understanding to talk to the doers and collect their input is finding a rare but important breed. These people should be hired immediately. Create a position if necessary, but don’t let them escape.

These types of personalities tend to be excellent product managers and, in a technical environment, can really steer a product in a productive direction.

For what it’s worth, Google has instituted, for many years now, 20% time. This is the policy that states that every Google employee, regardless of role or position, is allowed 20% of their work week to work on any project that they want to. Allowing the doers, talkers and that happy middle the opportunity to be creative, to be structured and to foster ideas, has resulted in many Google Labs projects.

Notably, some of the best Google products used today, have come out of 20% time projects: Gmail, Google News and Google Reader. Additionally, many features (such as keyboard shortcuts in a variety of Google products) have also been added to existing Google products as a result of 20% time. There is even a blind engineer who created Google’s Accessible Search product.

While doers are important, and talkers are important, finding a way to foster open communication and understanding between them is essential for innovation.

Aaron Brazell

Allow me to Complain…

Festivus was the other day – the traditional day for people to “air their grievances”. Since I did not do that, and I seem to be a bit fired up today, I’m going to depart from the normal informative, intellectual articles that would usually go up here and instead rant a bit. Because there are a lot of things to rant about, and I believe very good reasons for those rants. If you will allow me…

The Rooney Rule

The Rooney Rule in the NFL requires a team to interview at least one minority candidate for a coaching position before it can be filled. The principle is clear… there are not enough opportunities for minority coaching candidates, so the NFL mandates the requirement.

The problem is, it does no good. It has become a thing of bureaucratic obstacles and a checklist item for franchises. Take the case of the Washington Redskins who are likely to fire head coach Jim Zorn in the next week after yet another abysmal performance.

During the preliminary process of hiring a new coach, the Redskins interviewed Skins Secondary coach Jerry Gray. Cool, cool. Except that it seems to have been done to fill a quota (yes, I used the Q word). Gray is not likely to get the job and probably never was likely to get the job but it was required that the Redskins interview a minority. Even the front page teaser of the article on NFL.com suggests the process is a quota-based process with the phrase, “Skins Interview Gray, Satisfy Rooney Rule”. Duh?

Search Neutrality

Search Neutrality is the bastard half-child of Net Neutrality. Net Neutrality, for context, is the Internet policy argument that Internet Service Providers (ISPs) should not be able to offer preferential treatment to higher-paying customers. First, let me go on record to say that I don’t necessarily support net neutrality, though there should be some regulation around Internet service provision, because it affects more than just carriers pissing among themselves.

Though I am not a fan of unfettered capitalism (thus my support for some regulation around net neutrality), two or more companies trying to make money should be able to create incentives that provide better services (or preferred service, if you will) to better (or higher-paying) customers. This has existed forever. You have airline frequent flier miles. You have premium accounts over basic accounts. You have different versions of operating systems offering better features. Etc. Etc. Etc. The Internet is not a Constitutionally protected right and is subject to the laws of competition and capitalism.

Which brings me back to search neutrality. There is some buzz around the idea that there should be regulation of search engines that would prevent search providers (Google, Bing, etc.) from having editorial policies (read: search algorithms) that treat some publishers more favorably than others, or that would prevent search providers from offering paid placement opportunities to publishers in anything but an agnostic fashion.

This, on its face, is wrong. Yet don’t underestimate some guy who has no idea how to organically grow search result placements (SERPs) trying to rally support from the ignorant to punish the evil empires of Microsoft and Google for exercising capitalistic rights and sound business opportunities. Let me be clear: any kind of neutrality buzzword derives from the inability of some to compete on merit in a marketplace. Can’t get SERPs? Smack Google with a search neutrality policy that makes everyone, everywhere completely equal while we eat fruit from trees and ride our unicorns. It doesn’t happen this way, people. Competition is created by innovation and capitalism. Survival of the fittest.

Aaron Brazell

Google Chrome OS: A lot to do about Nothing

Google is known for doing many things right. Despite giving them a hard time over the years, it’s undeniable that my life still revolves, in a very real way, around Google products. I use Gmail not only for, er, Gmail but I use Google Apps to run all my email services including my public aaron@technosailor.com email.

Likewise, my analytics for this and other sites is Google Analytics (for those scared by big words, analytics is how I measure traffic and visitor interaction on the site). This blog, which is powered by WordPress, implements Google Gears to speed up transactions on the backend and Gears is used also to provide offline support to Google apps I run.

Google Search probably will never be replaced by Bing in my world.

My BlackBerry has a Gmail app and Google Maps, both of which I find to be imperative. Likewise, I use Google Talk for IM and I have apps for that on both my BlackBerry as well as my iPod Touch (The Jesus phone without the Great Satan called AT&T).

In other words, try as I might, I can’t not love Google for so many things.

Yet… I just can’t get excited about the announcement in recent days that Google is coming out with a new operating system, expected in 2010, that will be based on its Google Chrome browser (which I don’t use because it’s Windows-only).

For all that Google has done right, they completely just insulted us, and most of us haven’t even figured it out yet. We’re all caught up in Shiny Object Syndrome, the likes of which are similar to Applegasms surrounding a new “Our Father who Art in Cupertino” product launch. We’re just not thinking straight.

Here’s why.

Google Chrome is a browser. While it’s a powerful browser, it is simply a browser. It cannot run applications. It cannot manage CPU cycles, assign process IDs to other applications, or control memory allocation for an entire computer. It’s not built that way. It’s built to be a browser.

The evidence that Google knows this (and Fake Steve Jobs does a nice job of pointing out why Operating Systems take 20 years to build right) is that it plans to use a Linux kernel. There you have it. A Linux kernel.

Ah ha, you might say. Linux has been proven to be an exceptional embedded operating system over the years, and with that I’ll agree with you. It makes perfect sense why Google would build their new operating system on Linux. It has proven its ability over the years to be an operating system for many devices, computer and non-computer alike. Why change now? God, those kids are smart over at Google.

Here’s the thing… All of the technology pundits, and Google themselves, are calling it a new operating system… when it’s far from it.

In fact, Google should be calling it a new desktop manager, similar to KDE, GNOME or, heck, even the desktop layer that sits atop the BSD underpinnings of Mac OS X. The operating system is Linux. For what it’s worth, Mac OS X’s interface should probably be called desktop manager software too, because underneath it’s built on BSD, a Unix variant.

There is nothing about an upcoming Google Chrome OS that can operate a system. Not within a year. That’s why they are using Linux.

I love Google, but folks need to step back and be a little objective. I mean, just a little.

Aaron Brazell

How Location Based Services Saved My Life

Sitting here in Automattic’s offices in San Francisco, I find myself lovingly caressing my Blackberry which, for a short time yesterday, I believed had been separated from me for good. It turns out I lost it the night before and was having phantom spasms over not having it in my pocket to check email, Twitter or do other activities I would normally engage in with my long-time partner and friend.

As I arose from my grogginess yesterday morning, my first instinct was to reach for my Blackberry to ascertain important overnight occurrences. You know, such as what drunken text messages I might have sent or had sent to me, what the final score was on the Red Sox game or who was talking about me on Twitter. It’s a hard habit to break so when I realized my phone was nowhere to be found, I panicked.

Then I remembered Google Latitude, the new, mostly useless location-based service announced earlier this year. Google Latitude includes a small piece of software that can be installed on [supported] phones. It uses GPS or cell tower triangulation to pinpoint a person's location. As I'm a Verizon Wireless customer, the only option I have is cell tower triangulation, so I can only be pinpointed to a general area.
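To give a feel for how tower-based positioning can work, here is a minimal sketch of one common approach: averaging the positions of nearby towers, weighted by signal strength. This is my illustration, not Latitude's actual algorithm, and the tower coordinates and signal strengths are made up.

```python
# Sketch of a weighted-centroid position estimate from cell towers.
# This is an illustrative assumption, not how Google Latitude works.

def estimate_position(towers):
    """towers: list of (latitude, longitude, signal_strength) tuples.

    Returns an estimated (latitude, longitude), pulled toward the
    towers with the strongest signal.
    """
    total = sum(strength for _, _, strength in towers)
    lat = sum(t_lat * s for t_lat, _, s in towers) / total
    lon = sum(t_lon * s for _, t_lon, s in towers) / total
    return lat, lon

# Three hypothetical towers around San Francisco's Fisherman's Wharf.
# The first tower has the strongest signal, so the estimate skews toward it.
towers = [
    (37.808, -122.417, 0.9),
    (37.805, -122.410, 0.4),
    (37.802, -122.420, 0.2),
]
lat, lon = estimate_position(towers)
```

The accuracy of an estimate like this depends on tower density, which is why the fix is to an area rather than a point.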
In a stroke of genius, I logged into Google Latitude from my computer in the hotel. There were only so many places the phone could be. The last place I wanted to see it was the back seat of the cab that had driven me home the previous night.

Fortunately, my phone was pinpointed (inaccurately, of course, since it was the phone and not me) at Fisherman's Wharf, at the In-N-Out Burger where I had enjoyed a west coast delicacy the night before. Or so I thought.

Fortunately, upon arrival at the In-N-Out Burger, the store manager did indeed have my BlackBerry, and I was able to carry on with my life.

This is a great example of how location based services can actually be useful. Instead of simply inviting the stalkerati or providing an unnecessary window into the life of the user, it is a good way for employees or assets to be tracked inexpensively. If you run a courier service, company-issued phones with Google Latitude might be a handy way to streamline your business operations.

Google Latitude is not the only “homing beacon” service out there. Tomorrow, with the launch of the iPhone 3G S, Apple is also introducing “Find My iPhone” with MobileMe, which will pinpoint the location of a lost or stolen iPhone. Clearly a different kind of benefit in the debate over the value of location-based services.

Aaron Brazell

Changing the Currency of Influence via Search

There is no doubt that Google is the king of search, but how did they become that way? In the old days (you know, before PageRank was dubbed irrelevant), the idea was that the number of links to a site, particularly from more “powerful” sites, increased the relevance of an indexed page in the Google index. To this day, that philosophy holds, though the weight has clearly shifted from “links from powerful sites” to “internal links”.
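The core idea behind PageRank can be sketched in a few lines: a page's score is fed by the scores of the pages linking to it, so a link from a “powerful” site counts for more than a link from an obscure one. This is a simplified illustration with a made-up link graph, not Google's production algorithm.

```python
# Minimal PageRank sketch (power iteration). The link graph is hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with rank spread evenly
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            # Each page distributes its rank evenly across its outbound links.
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

graph = {
    "powerful.com": ["smallblog.com"],
    "smallblog.com": ["powerful.com", "newsite.com"],
    "newsite.com": ["powerful.com"],
}
ranks = pagerank(graph)
```

Pages that attract links from well-linked pages end up with higher rank, which is exactly the “currency” the rest of this post is talking about.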

Google has not significantly adjusted how it determines the importance of an article, site or keyword in some time, though it claims some 70+ algorithmic tweaks last year. And that's fine. Google's index is Google's index. It has trained us how to search and what to expect when we search. It has taught us silently, and we compare all other results to Google's results, despite the fact that Google's results are themselves arbitrary, based on its own algorithmic determinations.

But I digress.

It’s interesting when new search engines or tools come out. It’s interesting to see the innovation as it takes place. One such tool that I discovered, almost by accident, does a great job of building an index around links and pages passed around Twitter. This tool is Topsy, which combines Twitter search with Google-like results (in other words, the results are not tweets themselves).

For those of you not occupying your every waking moment on Twitter, it is, by most objective measures, the new information aggregator – like RSS readers were supposed to be or portal sites try to be.

The currency of influence on Twitter can be summarized in two letters: RT (short for Retweet). Many bloggers are including the ability for stories to be “retweeted”, or redistributed on Twitter, and that is precisely what Topsy is measuring. (An example of retweeting capability on a blog can be seen on this blog – see that Retweet button at the end of the article?)

Much like Google set the currency of relevance based on links (an assumption that was valid at the time and still carries some validity today), Topsy has recognized that influence is increasingly distributed via Twitter, and thus a relevancy algorithm must be built around this new currency.
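To make the analogy concrete, here is a toy version of what a retweet-based ranker might do. This is an assumption on my part, not Topsy's actual algorithm, and the URLs, usernames and follower counts are invented: score each link not by its raw retweet count, but by the influence of the people retweeting it.

```python
# Toy sketch of retweet-weighted link scoring (hypothetical, not Topsy's
# real algorithm): a retweet from a widely followed user carries more weight.

def score_links(retweets, follower_counts):
    """retweets maps a URL to the list of users who retweeted it;
    follower_counts maps a username to that user's follower count."""
    scores = {}
    for url, users in retweets.items():
        # Sum the followers of everyone who retweeted the link.
        scores[url] = sum(follower_counts.get(user, 0) for user in users)
    return scores

retweets = {
    "example.com/post-a": ["alice", "bob"],
    "example.com/post-b": ["carol"],
}
follower_counts = {"alice": 150, "bob": 90, "carol": 5000}
scores = score_links(retweets, follower_counts)
```

Under a scheme like this, a single retweet from an influential user can outweigh several from less-followed accounts, which is the same substitution of “powerful links” for “many links” that PageRank made.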
I don’t know if Topsy is a “Google killer,” or even if it strives to be one. My guess is it will never supplant Google in our lives. However, an ambitious approach to this new distribution of influence is an important, and enjoyable, thing to watch.