Always New Mistakes

December 5, 2010

Localization Wars: Facebook vs Foursquare

Filed under: Technology — Alex Barrera @ 7:17 pm

Some months ago, Facebook unveiled their location strategy, aka Facebook Places. It comes as no surprise that their next move was getting into location. After buying FriendFeed and copying Twitter, it’s now the turn of Foursquare and Gowalla.

Most of the discussion going around centers on the struggle between Facebook and Foursquare. Even though Facebook insists they’ve been working together, it’s crystal clear that the announcement has hurt Foursquare badly.

So, the big question: will Facebook outflank Foursquare? I’m afraid the odds of Foursquare winning this battle are rather thin. The key driver in the location space is the user base of these systems. The more people checking in and leaving geotags, the more useful the system becomes. In that respect, Facebook just dwarfs Foursquare.

Not only that, but to use these systems you need a specific app. The problem is that most people already have the Facebook app. Getting onto Foursquare means the extra hassle of downloading and configuring a new app, while the Facebook app not only has a much larger audience but comes preinstalled on most new smartphones.

Now, what can Foursquare do to fight back? It’s clear they need to differentiate themselves from Facebook. They need something you can only get on Foursquare and not on Facebook. I’m not sure they’ll be capable of that if Facebook plays its cards correctly: any new feature that gets traction can be easily replicated by Facebook. The only viable path is to build something so foreign to Facebook’s DNA that Facebook can’t copy it without going against its own strategy.

For example, even though Facebook tried to turn its news feed into a Twitter-like feed, the Twitter audience kept using Twitter. The main reason is how different the follow/follower dynamics are from Facebook’s friend/not-friend dynamics. That simple thing is what really prevents Facebook from crushing Twitter. Facebook can’t easily change the way it handles social relationships, because that would mean changing the core dynamics of the company. Something like that is what Foursquare needs to pull out of its sleeve if it wants to stay alive.

Many people have been saying that what the location space needs is a shared geolocation database, maintained by several companies, so that each company can focus on building cool things on top of it. With Facebook on the loose, some now say Facebook could become the maintainer of that database. The problem is that, while the original hypothesis made sense, the Facebook version is extremely dangerous. The first assumed several guardians; the second, just one: Facebook. Needless to say, letting Facebook control all the location data is like handing over the keys to your kingdom. Information and data are key here; even though it’s a tedious task, it’s critical to build the features that let you differentiate from Facebook.

Let the geowars begin!! Any comments? Insights? Should Foursquare have planned for this attack beforehand? What do you think?

Images: cnet, jamesnorquay.

July 20, 2009

Scalability issues for dummies

Filed under: Business, Technology — Alex Barrera @ 2:34 pm

Every once in a while people ask me what’s taking me so long to open my startup Inkzee to the public. They also ask what exactly I’ve been doing, since the site looks exactly the same. I normally answer that things aren’t easy and that it takes time, especially if you are on your own, like I am. After a while I end up explaining my problems with scalability, and that’s the point where people just can’t follow me anymore. So I’m going to explain here what scalability problems are and how deep their repercussions run for a small company.

Most web applications, like Inkzee, Facebook or Twitter, are made of two parts: what we tech nerds call the frontend and the backend. The frontend is the part of the application that’s exposed to users, that is, the user interface (UI), the emails, the information that is shown. All that UI is a mix of different kinds of code, be it PHP, JavaScript, HTML, etc. The frontend is in charge of drawing the UI on the user’s screen and displaying all the information the user expects from the application. But that information has to come from somewhere, and that somewhere is the backend.


The backend is all the programs and software that run behind the scenes and that are in charge of generating, maintaining and delivering the information the frontend displays to the user. The backend can be very homogeneous or very heterogeneous, but it normally comprises two parts: the database (where the information is stored) and the software that talks to that database, does the data crunching and connects it all to the frontend.
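To make the split concrete, here’s a minimal sketch of what the simplest kind of backend boils down to: one function stores what the user typed, another reads it back for the frontend. It’s illustrative only, written in Python with made-up names (save_note, get_notes); it isn’t Inkzee’s code or anyone else’s.

```python
import sqlite3

# Hypothetical, minimal backend: a database plus the software that talks to it.

def save_note(db: sqlite3.Connection, user_id: int, text: str) -> None:
    """Store what the user entered in the frontend."""
    db.execute("INSERT INTO notes (user_id, body) VALUES (?, ?)", (user_id, text))
    db.commit()

def get_notes(db: sqlite3.Connection, user_id: int) -> list[str]:
    """Retrieve it again so the frontend can display it."""
    rows = db.execute("SELECT body FROM notes WHERE user_id = ?", (user_id,)).fetchall()
    return [body for (body,) in rows]

# Toy setup and usage:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (user_id INTEGER, body TEXT)")
save_note(db, 42, "hello world")
print(get_notes(db, 42))  # ['hello world']
```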

Now, some web applications have a barebones backend, very simple and lightweight: essentially some software that takes what the user enters in the interface and stores it in the database and, vice versa, retrieves it from the database and shows it to the user. Other web applications have an extremely complex backend (e.g. Twitter, Facebook, …). These don’t just retrieve data; they have to perform really complex operations on it, and not only complex but very expensive in terms of computational power. For example, each time a user uploads a picture to Facebook, it follows roughly this path (a rough code sketch follows the list):

  • The picture is stored on a specific hard drive. The backend has to determine which drive corresponds to that user (yes, there are multiple drives, and each one is assigned to a group of users so the load is distributed).
  • Once stored, the picture is sent to a processing queue, where image-processing software turns it into a thumbnail. This step is expensive: it has to analyze the picture and reduce it to a smaller representation of the image while preserving most of its quality.
  • After processing, the backend stores the newly created thumbnail in the database and keeps both the picture and the thumbnail in an intermediate in-memory “database” for faster access (a cache), because it’s faster to retrieve data from memory than from a hard drive.
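To make that pipeline a bit more tangible, here is a toy sketch of the three steps above. Everything in it is a stand-in: the “drives” are dictionaries, the queue is a plain list, and the “thumbnail” is faked. Take it as an illustration of the flow, not as how Facebook actually does it.

```python
import hashlib
from collections import deque

NUM_DRIVES = 8
drives = [dict() for _ in range(NUM_DRIVES)]   # pretend hard drives
thumbnail_queue = deque()                      # pretend processing queue
cache = {}                                     # pretend in-memory cache

def pick_drive(user_id: int) -> int:
    """Assign each user to one drive so uploads are spread across all of them."""
    return int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % NUM_DRIVES

def upload_photo(user_id: int, photo: bytes) -> str:
    drive = pick_drive(user_id)
    photo_id = f"user{user_id}-photo{len(drives[drive])}"
    drives[drive][photo_id] = photo            # 1. store the original
    thumbnail_queue.append((drive, photo_id))  # 2. queue the expensive work
    return photo_id

def thumbnail_worker() -> None:
    """Background step: build thumbnails, then cache picture and thumbnail."""
    while thumbnail_queue:
        drive, photo_id = thumbnail_queue.popleft()
        original = drives[drive][photo_id]
        thumb = original[:1024]                # 3. fake "thumbnail": just the first KB
        drives[drive][photo_id + "-thumb"] = thumb
        cache[photo_id] = original             # keep both in memory for fast reads
        cache[photo_id + "-thumb"] = thumb

pid = upload_photo(42, b"\xff\xd8" + b"\x00" * 10_000)  # fake JPEG bytes
thumbnail_worker()
print(pid in cache, pid + "-thumb" in cache)   # True True
```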

This is an approximation of what happens to a picture when you upload it to a social network; I’m pretty sure it goes through many more processes. Now suppose 1% of a social network’s users are uploading pictures at any given moment, around 20 photos each. With Facebook’s roughly 250 million users, that’s 2.5 million users uploading at the same time, on the order of 50 million photos in flight. Trust me when I tell you, that’s a lot of data crunching.

The problem

The best user interfaces (frontends) are designed so that all the complexity going on behind the scenes is never shown to the end user. The problem is that the frontend depends heavily on the backend. If the backend is slow, the frontend can’t get the information the user is requesting and will feel SLOW to the end user. Not just slow, but in many cases unreliable or simply unavailable (meet the Twitter Fail Whale :P).


So, what can cause the backend to be slow? Ohhhhhh don’t get me started!! There are so many reasons why the backend might be slow or broken! But most of them are triggered by growth. That is, as the web application gets used by more and more users, the backend starts to fall apart. That’s what the tech world calls scalability problems: the backend can’t scale at the same speed at which users pour into the application. And it’s not only a matter of more users, but of users interacting more heavily with the site. For example, you might have 100,000 active users and never have experienced big scalability problems. Suddenly you release a feature that lets your users share pictures more easily… BAM!! Your backend goes down in 10 minutes. Why!! Why?!! you scream while you watch your servers go down in flames. After all, you have the same number of users, so what happened? Well, most probably the backend system that handles picture sharing was designed and tested with only a few users. Now it chokes under the real load.


The REAL problem

Once you have scalability problems, the next logical step is to find where the bottleneck is and why it’s happening. This might seem easy, but it isn’t at all; it’s like looking for a needle in a haystack. Big backends are normally VERY complex, with many parts coded in different programming languages by different people. Not only that, but sometimes problems arise in several parts of the backend at once. So after a couple of really stressful hours you find the bottlenecks and think of a solution to fix them. Ahh my friend, then you realize it’s not as easy to fix as you thought. First of all, you have no clue whether the fixes your team has come up with are good enough. Why? Because you’re stepping into unexplored territory. Few people have had to tackle a similar problem, and even fewer have dealt with your data and your systems. So even if you find someone with the same problem, the solution might differ depending on which systems your backend uses or which architecture you have. This is the point where you realize that developers aren’t engineers but craftsmen, and that fixing these problems isn’t exactly a science but black voodoo magic.

So here you are, with a bunch of possible fixes but no clue whether they will really work or will just be patches that need more fixes in two weeks. Normally you try to benchmark the solutions, but that’s not an easy task, especially because you have no realistic load to test them against except your production servers, and no, you don’t want to fuck up the production servers more than they already are.
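For what it’s worth, the kind of crude benchmark you end up writing looks something like this: fire synthetic requests at a staging copy of the suspect code path and compare latency percentiles before and after the fix. The handle_request function below is a placeholder I made up; it stands in for whatever backend work you’re trying to validate, and this is not a real load-testing tool.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> float:
    """Stand-in for the suspect backend code path; returns its latency in seconds."""
    start = time.perf_counter()
    sum(x * x for x in range(10_000))   # pretend work
    return time.perf_counter() - start

def run_benchmark(num_requests: int = 500, concurrency: int = 20) -> None:
    """Replay synthetic load and report p50/p99 latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(handle_request, range(num_requests)))
    p50 = statistics.median(latencies)
    p99 = latencies[int(len(latencies) * 0.99) - 1]
    print(f"p50={p50 * 1000:.1f} ms  p99={p99 * 1000:.1f} ms")

run_benchmark()  # run once on the old code path, once on the patched one, and compare
```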

Finally, after some black magic and a few simple tests, you cross your fingers and try the fix on the production servers. After several hours of monitoring the backend for new “leaks”, you scream with happiness as the patch seems to work. Then you start to realize that the patch won’t hold forever and that you need a more drastic solution to the problem.

You sit down with your tech team (or on your own, as in my case 😦 ) and start drafting a new solution. Suddenly you realize that the best fix means changing the way your backend works. And by change I mean you need to redevelop a big chunk of it. This implies several things: you’ll need to invest a lot of time and resources, you’ll lose the stability your backend had (prior to the incident), you’ll walk into territory that’s unexplored for your team and, worst of all, you can’t just unplug your production servers and swap the backend. You have to do it so that both backends coexist for a while, until you’ve switched all of your servers from the old one to the new one.
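The usual trick for that coexistence phase (a generic sketch, not what I’m doing at Inkzee) is dual writing: every write goes to both backends while reads keep coming from the old one, until the new one is trusted and you flip the switch. The OldBackend/NewBackend classes below are placeholders, not anyone’s real code.

```python
class OldBackend:
    def __init__(self) -> None:
        self.data = {}
    def write(self, key: str, value: str) -> None:
        self.data[key] = value
    def read(self, key: str) -> str | None:
        return self.data.get(key)

class NewBackend(OldBackend):
    pass  # same toy interface; imagine a redesigned storage layer underneath

class MigratingBackend:
    """Dual-write wrapper the frontend talks to while the migration is in progress."""
    def __init__(self, old: OldBackend, new: NewBackend, read_from_new: bool = False):
        self.old, self.new, self.read_from_new = old, new, read_from_new
    def write(self, key: str, value: str) -> None:
        self.old.write(key, value)   # keep the old backend authoritative...
        self.new.write(key, value)   # ...while the new one catches up
    def read(self, key: str) -> str | None:
        return (self.new if self.read_from_new else self.old).read(key)

backend = MigratingBackend(OldBackend(), NewBackend())
backend.write("user:42:name", "Alex")
print(backend.read("user:42:name"))   # served by the old backend
backend.read_from_new = True          # flip once the new backend is verified
print(backend.read("user:42:name"))   # now served by the new backend
```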

Now, the REAL problem is that this change, this redesign, grinds the whole company to a halt. All resources, be it people or money, are poured into the redesign, so nothing new can be done. Most outsiders just don’t understand the depth of the change and will bash the company for not doing new things, not releasing new features, not fixing old bugs, etc. On top of that, investors will start to get anxious and demand that things start moving. So the outside world only sees that you’ve stalled, while the teams inside are suffering the pressure. Meanwhile, developers inside the company get extremely frustrated by the pace of things. They can’t add new features, and even when fixing bugs they need to fix them twice: once in the old backend, once in the new one.

So, in the end, you realize the shit has hit the fan and you got all of it. It’s hard, very hard, to be there. If you haven’t experienced it, you have no idea how hard it is. Not only as a developer but as a founder, CEO or executive, you’ll feel the pain. You can’t promote your site because more traffic might aggravate the old backend’s problems, you can’t give users new features because you have no resources, and you’ll try to explain the problem to investors who won’t have a clue what you’re talking about… “backend what?”. Current customers will be pissed at you because the site is running slow and you appear to be doing nothing to fix it. So, in the end, everything freezes until the new backend is in place.

How long does this take? It depends: on the size of the redesign, the size of the tech team, the skills of the team and, especially, the skills of the management. During this phase, management must execute impeccably. Sadly, that’s not the case in most places, so priorities change, mistakes are made and the redesign gets delayed over and over again.

It takes very good leadership to make it through this period: someone who knows where their priorities lie and who can see ahead and grasp the importance of the task at hand. Needless to say, such a figure is lacking in most companies. That’s why it took Twitter so long to get its act together, why Facebook took so long to speed up, etc.

I am there; I am going through the redesign phase (for the second time now). It’s hard, it’s lonely, it’s discouraging and frustrating, but it needs to be done. I wrote this post so outsiders can get a glimpse of what it is to be there and how it affects the whole company, not just the tech department. Scalability problems aren’t something you can dismiss as ONLY technical; their roots may be technical, but their effects shake the whole company.

Let there be light 🙂

July 6, 2009

Is the cloud the beginning of Skynet?

Filed under: Technology — Alex Barrera @ 6:54 pm

I recently went to see the latest Terminator movie, Terminator Salvation. I have to say I’ve always been a great fan of the franchise, even though I don’t really believe in such a catastrophic future. Nevertheless, after watching the movie, which was pretty decent by the way (a little soft at the end though), I started thinking about how smart Skynet is depicted as being in the Terminator movies. I thought, hey, if you could just nuke the datacenter where Skynet lives, you would eliminate it. But then I started thinking about cloud computing.


For those unfamiliar with cloud computing (those familiar can skip this paragraph), it’s basically a new way of using computational resources (I’m oversimplifying the idea here). Instead of buying or renting servers to deploy a web application, you rent computational power from a provider and pay by the hour. Put simply, companies with spare capacity on their own servers rent you that capacity. There is no need to buy expensive hardware or maintain it; you use the computational units when you need them and in whatever quantity you need. That way you can absorb temporary usage spikes in your application by spinning up more computational units and switching them off after the spike. The cool thing about it is that you don’t need to care about the underlying hardware, nor about replicating your data: the cloud system transparently keeps several copies of your data, so that if you lose some, you’ll still be able to recover it.
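To picture the elasticity part, here’s a toy autoscaler: it adds rented “computational units” when the load spikes and releases them afterwards. The numbers and the provision/release functions are invented for illustration; real providers expose their own APIs for this.

```python
active_units: list[str] = []

def provision_unit() -> None:
    """Rent one more computational unit from the provider (pretend call)."""
    active_units.append(f"unit-{len(active_units) + 1}")

def release_unit() -> None:
    """Hand a unit back so we stop paying for it (pretend call)."""
    if len(active_units) > 1:          # always keep at least one unit running
        active_units.pop()

def autoscale(requests_per_second: float, capacity_per_unit: float = 100.0) -> None:
    """Keep just enough units to serve the current load."""
    needed = max(1, -(-int(requests_per_second) // int(capacity_per_unit)))  # ceiling
    while len(active_units) < needed:
        provision_unit()
    while len(active_units) > needed:
        release_unit()

for load in [50, 480, 950, 120]:       # a traffic spike arriving and fading away
    autoscale(load)
    print(f"load={load} rps -> {len(active_units)} unit(s)")
```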


So, back to Skynet. Most cloud computing systems are built to be extremely reliable: if any of the servers in use fails, the system switches to a new server transparently. The end user won’t even notice the underlying hardware had a problem. The same goes for the data. These advances belong to a field known as high availability and, although it’s not perfect, they are getting there. In the near future, few web applications will experience downtime because of faulty hardware or problems in a datacenter (like the recent lightning strike at an Amazon datacenter). Servers will be so widely distributed that even if you nuked one of a cloud provider’s datacenters, the systems wouldn’t go down. Most probably a bunch of other datacenters all over the world would take over, and you, as a user, wouldn’t notice a thing.
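The failover idea itself is simple enough to sketch: a client that silently retries against other replicas when the one it hit is down. The Replica class and the failure simulation below are purely illustrative.

```python
import random

class Replica:
    def __init__(self, name: str, alive: bool = True) -> None:
        self.name, self.alive, self.data = name, alive, {"answer": "42"}
    def get(self, key: str) -> str:
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return self.data[key]

def read_with_failover(replicas: list[Replica], key: str) -> str:
    """Try replicas in random order; the caller never sees a single-node failure."""
    for replica in random.sample(replicas, len(replicas)):
        try:
            return replica.get(key)
        except ConnectionError:
            continue                  # that node (or datacenter) is gone, move on
    raise RuntimeError("all replicas are down")

cluster = [Replica("us-east"), Replica("eu-west"), Replica("asia", alive=False)]
print(read_with_failover(cluster, "answer"))  # still '42' even with a dead replica
```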

Now, if you think about Skynet and strip out the AI, the backbone of it is just what cloud computing is trying to achieve right now. How many years more will we need to build a system that has not a single point of failure? Scary thoughts…

June 18, 2008

Next generation search engines

Filed under: Technology — Alex Barrera @ 2:31 pm

I was reading Scoble’s post about Windows Live Search and I realized what the future of search is going to look like (or so I think). Users don’t know how to express in writing what they are looking for. Most of the time you type a couple of keywords that should, theoretically, yield some results among which you can spot the one you want. Human-powered search engines like Mahalo have the same problem: they rely on human beings building pages with the most relevant information about a topic, but if you are looking for something uncommon you’ll run into trouble. Last but not least, semantic search engines like Powerset are closer to the goal, but there is still a big hurdle in the user’s way. How do you phrase, as a user, the information you are looking for? You need to type a sentence, but it’s not at all obvious what that sentence should be, which makes searching hard and slow.

Now, the big problem, again, is writing down what you are looking for in a way the search engine understands. How about another approach? How about a search engine that reads your mind, so it knows what you are really looking for? Most readers will have had a good laugh at that statement, but mind-reading devices are already a reality, with their own field of study called brain-machine interfaces (BMI). Several gaming companies are already using these devices to let players control virtual avatars with their minds.

And how do these devices work? Generally speaking, it’s a helmet that reads neural impulses in several areas of your brain. In the gaming example, they read the brain areas dedicated to movement, mapping neuron firing patterns to a specific movement in the game. This technology is still taking its first steps in the commercial arena, but I’m pretty sure we’ll see more and more devices built on it.

Now, is it a big stretch to say that we could use similar devices to read our search intentions? It is indeed; that is still out of reach, not because of the technology but because of a lack of neuroscientific data pinpointing which brain areas we use when searching online. But it’s just a matter of time (I’m talking about 5 to 10 years here).

There are big problems with this type of search: you not only need a web index, but also an index of neuron firing patterns and an engine that understands them and translates them into a web search query. Another big issue is brain privacy. Your neuron firing patterns would need to be transmitted over the Internet and stored somewhere, which raises major privacy concerns that would have to be addressed before anyone could use a search engine like this.
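Just to make the “firing pattern index” idea less abstract, here’s a purely speculative toy: treat a recorded pattern as a vector of per-region activity and map it to the closest known pattern, which is labeled with a search query. The vectors and labels are invented for illustration; no real BMI works off three numbers.

```python
import math

pattern_index = {
    (0.9, 0.1, 0.3): "restaurants near me",
    (0.2, 0.8, 0.5): "terminator salvation review",
    (0.4, 0.4, 0.9): "cloud computing definition",
}

def distance(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    """Euclidean distance between two activity vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pattern_to_query(pattern: tuple[float, ...]) -> str:
    """Return the query whose stored firing pattern is nearest to the observed one."""
    return min(pattern_index.items(), key=lambda item: distance(item[0], pattern))[1]

observed = (0.35, 0.45, 0.85)         # pretend readout from the helmet
print(pattern_to_query(observed))     # 'cloud computing definition'
```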

Nevertheless, and despite all the problems that might arise with an idea like this, I truly think we’ll see something like it someday, and I have to say it will be awesome. I don’t know whether any company is currently investing in a mind-controlled search engine, but it would be a great project for a big player like Google, IBM or Microsoft.

Do you like the idea of this next-gen search engine? What problems do you see with it? Would you use something like it?

January 30, 2008

Because every word counts: Twitter experiences

Filed under: Business, Technology — Alex Barrera @ 3:47 pm

Recently I started using Twitter. I must confess I wasn’t very fond of it at first; I just didn’t understand what use I could get out of it. Even though I’m still not a great fan of the service, I have to admit it gives me some value. Many people try to describe Twitter, and most of them end up saying it’s like a chat (IRC, ICQ, etc.). My own definition would be that “Twitter is a slow-motion chat where you get to decide who talks in it“. The key and really interesting part is deciding who talks in the chat. For me that’s a huge difference between IRC and Twitter.

From a business perspective, I use Facebook to see what key people in my industry are doing. I can monitor which events they are going to, whom they are talking with and which items they are sharing. Again, the good thing about Facebook is that I choose whom I want to be friends with. One difference between Facebook and Twitter, though, is that on Facebook I always need the friendship to be approved, while on Twitter (except for protected accounts, which are rare) I can follow whomever I want.

As for the quality of the information, I must say it’s just different. If you want to write about something and it’s long enough (say, more than a couple of sentences), you’ll probably write it on your blog. But if it’s just a link you want to share, or a quick idea, you don’t have a tool for sharing it with a wide audience. Granted, you could write it as a blog post, but you risk burning out your readers with a high frequency of posts with very little content. That’s where Twitter comes into action: it lets you post your short musings to a different kind of audience. Getting back to the quality of the information, the good part is that you get to choose high-profile Twitter users you think will say or share interesting things. Martín Varsavsky, Jeremiah Owyang and Mike Butcher are good examples. And if you don’t like someone’s content, you can always “unfollow” them with no repercussions.

Finally, while reading a book by Ricardo Semler (Angel, thank you so much for the recommendation), I came across a very good quote attributed to Mark Twain: “I’m sorry I wrote a long letter, I didn’t have time to write a shorter one“. It holds an awful lot of truth: it’s harder to write short but meaningful texts than long, cluttered ones. That got me thinking about Twitter and its repercussions for heavy users. How will a 140-character restriction transform their way of writing, and even of thinking? I suppose this is something we won’t see at first, but only in the long run. I know I’ve changed the way I listen to people. I’m so used to crawling through hundreds of blog posts a day that I look for the essence of things, and only if I like the essence will I read the whole post. This way of working is seeping into my offline life: I now find myself constantly telling people to cut the crap and get to the bottom line (I must say that people in general, and in Spain specifically, talk way too much and say way too little).

I also think that, in the same way bloggers evolve and the way they write posts changes over time (for the better, I hope), the same principle applies to Twitter. At first users just write about their lives, and then they shift away from that towards a more information-rich stream (this doesn’t apply to everybody, though).

In conclusion, Twitter covers a different niche than blogs or Facebook do, and it targets a different audience. That being said, I recommend that people who consider themselves information junkies give it a try if they haven’t. You can follow me on my Twitter account, and hopefully I’ll start changing what I write there. Twitter should ask “What are you thinking?” instead of “What are you doing?”.

January 24, 2008

The Explanation Problem and why we suck at it

Filed under: Technology — Alex Barrera @ 12:57 pm

I’ve come to believe there is a huge knowledge gap between early adopters and “the rest”. Most early adopters are very tech-savvy and have a special gift for understanding how things work. I confess I’m an early adopter myself, and even though I love being one, sometimes I feel very disconnected from the rest of the world. This is especially true if you are an entrepreneur and need to pitch your idea to investors, friends, potential users, etc. And of course, most tech entrepreneurs are early adopters, so it’s a common situation.

The problem arises when you are talking about something that’s normal to you, be it Twitter, Facebook or Seesmic, and part, if not all, of your audience isn’t made up of early adopters. You’ll probably get questions like “Sees… what?”. Suddenly everybody is looking at you, demanding an explanation because, of course, 99% of them had the same question. Now, for someone who lives and breathes technology, it’s very hard to explain a new concept or service. We tend to lean on other tech terms in our quest to explain something, which in the end just confuses the audience even more.

Of course, the reason behind this is that early adopters don’t think the way the rest of the world does. They’ve absorbed such a huge technological background that they assume everybody is like them and capable of going from A to C without talking about B (because, of course, B is rather obvious… or is it?). This problem isn’t unique to early adopters; it affects many other professionals, like doctors or architects.

Now, I was reading Lee LeFever’s latest blog post, where he gives the key to the explanation problem. I have to confess it was an “AHA” moment. Lee just nails it! He explains:

Often, when someone asks “what is…”, they really mean “Why does it matter to me?” By considering what matters to someone, the answer becomes different and more likely to give them information they can act on.

Why does it matter to me? What is it going to give me? How can I use it to make my life easier? These and similar questions are the cornerstone of this vexing problem. If you, like me, are used to dealing with your family’s computer problems, you’ll know how hard it is to explain our world to non-geeks. Personally, I tend to over-explain instead of answering “yes” or “no”, which is what really matters to the person asking. From now on, I’ll try to first ask myself the “Why does it matter to me?” question before answering.

So, why does this post matter to you? Because it will help you explain what you know to a wider audience.

Image credit: scanned.wordpress.com
