Tag Archives: networking


The Internet is About to Become WAY Faster

Earlier this week, the big news in the tech space surrounded the completion of the HTTP/2 Spec.

Gibberish, gibberish, gibberish.

What does this mean for the internet? The short version is: it’s about to become way faster.

Faster is extremely important if you want to make money from traffic to your site. Or, you know, if you use the internet at all. No one wants to wait for a website that just sits there and spins while it’s trying to load. We all hate that. And importantly, Google hates that.

Several years ago, Google invented and implemented the SPDY protocol. Simply, SPDY allows for the compression and efficient transmission of requests between browser and server. I’ll get into the technical details later, but for the sake of summarizing why the Internet is about to get faster… Google’s SPDY technology is the cornerstone of HTTP/2.

Now that the spec is finalized, it goes to editorial for cleanup and publishing… but the nuts and bolts of the spec will not change going forward. That means browsers and servers can start rolling out these changes as soon as… today.

Technical explanation

If you are not interested in the technical explanation for HTTP/2, please skip to the next section. I don’t want you falling asleep and drooling on your keyboard.

In the original days of the internet, there was the HTTP/1.0 spec. This spec defined how clients (browsers) and servers communicated over a network. We still use 1.0 quite a bit today, though 1.1 is currently the preferred version.

When it comes to HTTP, the concept is simple: a client requests a resource (image, HTML, MP3, whatever) from a server; the web server interprets the request, goes back into a storage closet, finds the resource, has the requestor sign for it, and then sends the resource on its way back to the client. A simple understanding is…

BROWSER: “Hey, server… can I get that image named image.jpg? You should find it in this folder.”

SERVER: “Sure, let me go look. Oh, here it is. There you go”

BROWSER: “Thanks, dude”

If the server can’t find the image in the directory, it sends the client a 404 (Not Found), but that’s just a sidenote.
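That browser/server conversation is easy to see in code. Here's a minimal sketch of one HTTP/1.1 exchange over a raw socket — the host, path, and port here are placeholders for illustration, not anything from a real setup:

```python
import socket

def fetch_status(host: str, path: str, port: int = 80) -> str:
    """Send one HTTP/1.1 request on one connection; return the status line."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"   # one request, then hang up
        "\r\n"
    )
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    # The first line is the server's answer, e.g. "HTTP/1.1 200 OK"
    # or "HTTP/1.1 404 Not Found".
    return response.split(b"\r\n", 1)[0].decode("ascii")
```

If the server can't find the resource, that first line comes back as the 404 from the dialogue above.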

HTTP/1.0’s Problem

The problem with HTTP/1.0, however, was that it only allowed a single request on a single connection to the server. To put it another way, say a web page has 10 images on it. In order to get those 10 images from the server, the browser would have to open 10 connections and send 10 requests. That creates a lot of requests for the server, and if the server wasn't optimized for that kind of capacity, it could crash… or at least be in the weeds. It was bad all around for bandwidth as well. All of those requests add up to high bandwidth costs!

So God gave us HTTP/1.1.

HTTP/1.1’s Problem

Given this inherent problem with HTTP/1.0, HTTP/1.1 enhanced the original spec by adding the concept of pipelining. Imagine, if you will, a highway tunnel. Generally, there’s no passing in a tunnel. While HTTP/1.0 only allowed a single car through the tunnel at a time, HTTP/1.1 allowed multiple cars in the tunnel, but dictated no passing. And because the web server on the other end of the tunnel can only process requests as they come in, you end up with a stacked up queue of requests waiting to get processed and… you still have the same problem where the server gets in the weeds and slows down.

In the case of both 1.0 and 1.1, server technology evolved to allow concurrency of requests. This gave us multiple lanes in the tunnel, but 1.1 still dictated no passing. So requests in one lane had to stay in that lane, but there could be more than one lane, which let us band-aid the inherent weakness in the HTTP/1.x stack.
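Those "multiple lanes" are just parallel connections. Here's a rough sketch of the workaround browsers settled on — each resource rides its own connection, a handful at a time. The host, paths, and the per-host limit of 6 are illustrative assumptions on my part, not anything mandated by the spec:

```python
import http.client
from concurrent.futures import ThreadPoolExecutor

def fetch_one(host: str, path: str, port: int = 80) -> int:
    """One resource per lane: a dedicated connection for each request."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request("GET", path)
        return conn.getresponse().status
    finally:
        conn.close()

def fetch_all(host, paths, port=80, lanes=6):
    """Open up to `lanes` connections at once -- the HTTP/1.x band-aid.

    6 roughly matches the per-host connection limit typical browsers use.
    """
    with ThreadPoolExecutor(max_workers=lanes) as pool:
        return list(pool.map(lambda p: fetch_one(host, p, port), paths))
```

More lanes hide some latency, but every lane still costs a TCP handshake and server resources — which is exactly the weakness HTTP/2 multiplexing removes.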

Enter HTTP/2

When Google started giving SEO benefits to sites based on speed, they also ate their own dog food and invented SPDY. SPDY allows for compression of resources in a much more efficient way when both server and client support it. It also allows a single request to fetch many resources at a time. That page that had 10 images and had to make 10 requests for those 10 images could now make a single request to get all 10 images at once. Efficiency, I tell you.

As with any working group, the task force that put together HTTP/2 had representation from Google. Google, as a good citizen, shared its knowledge and spec for SPDY with the working group, and it became the basis of HTTP/2. In fact, Google will now eliminate SPDY in favor of HTTP/2.

Clients are supporting HTTP/2 now. Well, a lot of them are, anyway… and that's because of Google's implementation of SPDY. Internet Explorer 11+, Firefox 36+, and Chrome all implement SPDY-HTTP/2 support, but none currently enable it by default. Safari and Mobile Safari will likely get support soon.

Most web servers also implement SPDY-HTTP/2 with the exception of lighttpd.
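If you're curious whether a given server will actually speak HTTP/2 to you, the protocol is negotiated during the TLS handshake. This is my own sketch using ALPN (the negotiation extension HTTP/2 settled on), not something from the spec announcement, and the hostname is just a placeholder:

```python
import socket
import ssl

def negotiated_protocol(host: str, port: int = 443):
    """Offer h2 and http/1.1 via ALPN; return whichever the server picks."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            # "h2" means the server is willing to speak HTTP/2 on this
            # connection; "http/1.1" (or None) means it fell back.
            return tls.selected_alpn_protocol()

# e.g. negotiated_protocol("www.example.com") -- "h2" if HTTP/2 is enabled
```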

What does it mean to me?

The new HTTP spec is probably not anything you need to worry about at this point. System administrators will want to make sure their web servers are up to date and their TLS certificates are upgraded. Though the working group does not require HTTP/2 to use TLS, I'd expect most server vendors to require it in their own implementations… for security reasons.

On the client side, HTTP/1.1 still works. The working group was very careful to ensure backwards compatibility with prior versions of the spec. So if your browser makes a 1.1 request to a 2.0 server, the server will still answer in 1.1 with the same limitations I described above.

As developers, we will most likely want to use 2.0 when we can. The spec is so newly finalized that it remains unclear what that means yet. For instance, what does this mean for the WordPress WP_Http class? Probably nothing in the short term, but I'd expect enhancements to start rolling in as optional "toys" for developers.

Are you a developer or engineer? What are your thoughts on the new spec?


TUTORIAL: Developing Locally on WordPress with Remote Database Over SSH

Today, I went about setting up a local WordPress install for some development I am doing at work. The problem was that I didn't want to bring the database from the existing development server site into my local MySQL instance. It's far too big. I figured this could be done via an SSH tunnel, and so I set about trying to figure it out. The setup worked flawlessly, and so, for your sake (and for my own future reference), I give you the steps.

Setting up the SSH Tunnel

I run a local MySQL server and that runs on the standard MySQL port 3306. So as these things go, I can’t bind anything else to port 3306 locally. I have to use an alternate port number. I chose 5555, but you can use whatever you want.

The command to run in a Terminal window is:

ssh -N -L 5555:127.0.0.1:3306 remoteuser@remotedomain.com -vv

A little bit about what this means.

The -N flag means that when connecting via SSH, we are not going to execute any commands. This is necessary for tunnelling since we literally will not execute any commands on the remote server; therefore, we won't get a command prompt.

The -L flag tells SSH that we are going to port forward. The following portion, 5555:127.0.0.1:3306, combined with the -L flag means, literally, forward all traffic on localhost (127.0.0.1) connecting on port 5555 to the remote server's port 3306 (the standard MySQL listening port).

The remote server and SSH connection are handled by remoteuser@remotedomain.com. This seems obvious, but just in case: you may be prompted to enter your SSH password.

The final flag can be omitted, but I like to keep it there so I know what's happening. The -vv flag tells SSH to be extra verbose about what is happening with the connection. It's a good way to debug if you need to, and to confirm that the port forwarding is actually taking place.
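Before wiring WordPress up to the tunnel, it's easy to confirm the local end is actually listening. Here's a little helper I'd use for that — my own addition, not part of the original steps; 5555 matches the port chosen above:

```python
import socket

def tunnel_is_up(port: int, host: str = "127.0.0.1") -> bool:
    """True if something (hopefully the ssh tunnel) accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

# While the ssh command above is running, tunnel_is_up(5555) should be True.
```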

Configuring WordPress to use the Tunnel

Now that we have a successful SSH tunnel, you have to configure WordPress to use it. In the wp-config.php file, simply modify the DB_HOST constant to read:

define( 'DB_HOST', '127.0.0.1:5555' );

You need to add two more constants, though, to override WordPress' existing siteurl and home options. This allows you to work with the localhost domain instead of being redirected to the remotedomain.com that is configured in WordPress.

define( 'WP_HOME', 'http://localhost' );

define( 'WP_SITEURL', 'http://localhost' );


With these configurations in place, loading up WordPress should now load in the database content from the remote host and you can get to work on local development. Word to the wise… don’t close the terminal window with the tunnel or the tunnel will be severed. If you have to minimize it so it’s not annoying you, go for it… just don’t close it.

Venture Files

TECHcocktail DC – The DC Tech Scene is definitely back

I have seen my share of networking events. Back during the dotcom era it was full of open bars and crazy companies with the latest software to change your life in some way. Then it was all about buying stuff on the web or a portal for something or another.

After the bubble burst, most people were just trying to hold on, and all you had to choose between in the DC area were NVTC (Northern VA Tech Council) and Potomac Officer Club events. NVTC was very government-focused, and those who showed up were mostly service providers (I have the hundreds of insurance and lawyer business cards to prove it). POC events were big events with well-known people but not a lot of good networking.

One good networking event I liked was the Tech Prayer breakfast but that was only once a year. What most of us were left with was going to conferences, usually not here, to get our networking on and find fellow entrepreneurs and real innovative thinkers.

Lately, there has been a change in the winds here in the DC area. With events like PodCampDC and Social Media Club's events, we are starting to see our cutting-edge tech scene finally re-emerge. Last Thursday night, it was totally confirmed with the TECH Cocktail DC event. It was held at MCCXXIII (1223) in DC, a swanky place that is over-priced for my usual weekend partying, but this event had cheap drinks (thank you, drink tickets) and about 300 people.

Below is a picture of the scene at the height of the evening.


While there have been many events that have drawn 400 people, this was different. Almost everyone was doing something startup related that was really cutting edge. There were social media people there (Technosailor and me included), innovative startups and actual investors looking to network.

There was also a great group of sponsors with great products to demo. Here is a great list from Jimmy over at EastCoastBlogging:

AwayFind – a product aimed at helping combat the email problem by letting your contacts get in touch with you via an online form.

iGala – a digital photo frame with a touchscreen interface that connects directly to Flickr and Gmail to stream photos to the frame like a slideshow.

Loladex – offers local recommendations from your trusted network of Facebook friends.

Odeo – launched a new beta version which offers both search and personalized content (audio and video) recommendations.

Voxant – a free licensed content offering for publishers which offers a pageview-based revenue share to anyone that embeds the content on their site.

WhyGoSolo – a new social networking site aimed at helping you to create spontaneous new connections so, as its name implies, you won’t go solo any longer.

A huge amount of thanks goes out to Frank Gruber and Eric Olson, who put on TECH Cocktail events around the country; they need to do it more than once a year here.

The vibe around this region is changing, and since we will never be Silicon Valley and never want to be, it is fantastic to see that there is a refreshed ecosystem of entrepreneurship here in the region.

Photo courtesy of jgarber

Editor’s Note: Some comments don’t seem to apply to this post as viewers of a show I was on were instructed to leave comments on this blog to get an invite to BrightKite. These comments will be approved but do not necessarily go with this post. Sorry!

Aaron Brazell

SXSW Recap

I’m back on Maryland soil now after changing my flight to come back home Wednesday instead of Thursday. It’s been a heck of a trip and I’m so exhausted. Nonetheless, it was one of the best trips I’ve ever been on. I’ll have to catch up on the sessions I wanted to attend but did not. (Last year, they were all released as podcasts after the event so I’m assuming the same will be done this year).


Amazing people everywhere. That’s the summary, as simplistic as that sounds. The overlapping of all my various circles and networks of people: DC folks interacting with Canadian friends interacting with the PodCamp circle of friends interacting with b5media folks. Not to mention the vast presence of my Twitter friends everywhere I looked. As I said, it was truly amazing.

The past few days, if you've been keeping up with this blog, you know that I've interviewed six fantastic folks: Brian Solis, "Pistachio" Laura Fitton, Frank Gruber, "Copyblogger" Brian Clark, Christina Warren and Rainer Cvillink. Obviously a very productive trip. Those were just the quick video sit-downs that I did. We also did our regular weekly District of Corruption live from Austin and appeared on a variety of videos and podcasts by Chris Brogan, Scott Stead, and Kris Smith, to name a few.

Though I met many, many new folks this week, I was very pleased to get the opportunity to meet (for the first time), Shel Israel, Erin Kotecki Vest, Micah Baldwin, Grant Robertson, Christina Warren, and Mark Cuban. Yes… I did just say I met Mark Cuban. It was only for a brief handshake as he breezed through the Washington VC sponsored Rock Band party Tuesday night.


Old friends reconnected include the inimitable Loren Feldman, Brian Clark, Darren “Problogger” Rowse, Scott Brooks, Alex Hillman and, as usual, many more.

On a light note, I’m a little miffed that the bulk of the coverage of the “Beacon Sucks” heckler moment during Mark Zuckerberg’s keynote wasn’t properly attributed. Christina did, but CNET, Valleywag and the rest of the coverage did not. It was me, of course, which makes me either the voice of the thoughts of all of us or just rude. Not sure which. You be the judge.


I want to thank the b5media crew that made the event a lot of fun for me. Thanks to Steph Agresta (aka, Internet Geek Girl) for being the face and voice of the Bloghaus. I know you’re wiped out from it, but it was great and I hope for you it was worth it. Lijit and Outbrain for sponsoring the “b5 ranch” – yes it was a real ranch. Grant and Christina for dinner, drinks and so much more with myself and the b5’ers. It’s a pretty cool dynamic to work for a competing blog network and still be some of the coolest people around.

Austin, I’m out. You were wonderful. Until SXSW ’09, stay weird Austin (that’s a tee shirt I saw today).

Aaron Brazell

Virtual Solutions to Shared Network Resources and Email Solutions

I've been looking into a number of different resources for b5media. Namely, I'm looking for a better email solution for mail serving, as well as an offsite backup and shared-resources solution. As we're a fairly large infrastructure spread globally, we have particular requirements as well.

Email Solution

Currently, we run all our mail locally through sendmail/dovecot, and while this is a workable solution, it's time consuming to have to go in and edit the configuration all the time. Not to mention trying to get SpamAssassin to actually deal with spam appropriately. On the surface, I'm a fan of Google Apps for Domains. It allows a nice interface for adding email accounts, and forwarders are a short step from there. And of course, Gmail's spam filtering is second to none. Add to that the ability to have POP/IMAP/SMTP access via Gmail, and it becomes the workaround solution for most of our users.

Then of course, we use Google Groups for mailing lists.