Apart from owning virtually every search that occurs on the internet (seriously, does anyone use Bing?), Google is in the enviable position of owning two assets that competitors wish they had.

  1. The Underlying Network (Google Fiber).
  2. An advanced Machine Learning (Cloud ML, TensorFlow) and Big Data platform (Hadoop itself grew out of Google’s MapReduce and GFS papers).

Combine these two advantages, integrate them into your cloud services, and you will see how Google is outmaneuvering AWS and Azure at their own game.

The Fiber Advantage (It’s Good to Be King!)

You are familiar with database transactions. You know that making an operation transactional across a cluster of databases requires special support from the database platform, and it typically requires all the nodes to be physically co-located. If one node is in Asia and another in the U.S., all bets are usually off when it comes to the transactionality of database operations. Google’s Cloud Spanner, however, has achieved just that. How, you ask? Think of the underlying network between North America and Asia as one GIANT WAN. If you own all the pieces (fiber optics) of the underlying network, you can put tight bounds on latency; Spanner then pairs that private network with its TrueTime API (GPS receivers and atomic clocks in every data center) to order transactions globally. And that is exactly what Google has achieved in both Cloud Spanner and Cloud Datastore (a large key-value storage engine).
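
For the curious, here is a minimal sketch of what a cross-region transaction looks like from the client’s side, using the google-cloud-spanner Python library (the project, instance, database, and table names are hypothetical):

```python
# pip install google-cloud-spanner
from google.cloud import spanner

# Hypothetical project/instance/database names.
client = spanner.Client(project="my-project")
database = client.instance("global-instance").database("bank")

def transfer(transaction):
    # Both statements commit atomically, even when the replicas
    # backing this database span continents.
    transaction.execute_update(
        "UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 'asia-1'"
    )
    transaction.execute_update(
        "UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 'us-1'"
    )

# Spanner retries the whole function automatically on transient aborts.
database.run_in_transaction(transfer)
```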

Scaling RDS on AWS – Compare this to RDS in AWS, which does allow for horizontal scaling – as long as all you need are READ-ONLY replicas. In other words, there is no transactional guarantee across the horizontal nodes, and no latency SLA when you span distant geographic regions such as Asia and North America.
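
To make the contrast concrete, here is roughly how you add a cross-region read replica to an RDS instance with boto3 (identifiers and the account number are invented); note that there is no equivalent call for a multi-region, read-write node:

```python
# pip install boto3
import boto3

# Create the replica in the DESTINATION region (Tokyo here).
rds = boto3.client("rds", region_name="ap-northeast-1")

# The replica is asynchronous and READ-ONLY; every write still
# goes to the single primary back in us-east-1.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-tokyo",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:mydb",
)
```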

Scaling Azure – On the Microsoft side, SQL Azure offers Elastic Database Tools (formerly known as Elastic Scale). These .NET client libraries help scale queries across multiple Azure SQL databases (sharding). While this is a cool feature and does allow geographically distant nodes, both the sharding and the transactional guarantees must be supported by your specific application. In other words, you can get ACID transactions and horizontally sharded data provided your application is coded to handle them – older .NET apps, for example, would not qualify without rework. The sketch below illustrates the kind of routing logic your application ends up owning.
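
Purely as an illustration (shown in Python with made-up shard hostnames, though Elastic Database Tools itself is .NET), this is the gist of application-side shard routing:

```python
# Hypothetical application-side sharding: the app, not the platform,
# decides which database holds a given customer's rows.
SHARDS = {
    0: "sqlserver-shard-us.example.com",
    1: "sqlserver-shard-eu.example.com",
    2: "sqlserver-shard-asia.example.com",
}

def shard_for(customer_id: int) -> str:
    # Simple hash-based routing; rebalancing, retries, and any
    # cross-shard transaction logic are also the app's problem.
    return SHARDS[hash(customer_id) % len(SHARDS)]

print(shard_for(42))
```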

Google is the King that owns all the land; it can make price adjustments and offer service level agreements that others can only aspire to. AWS and Microsoft need to lease the underlying physical network, and, to a large extent, their SLAs and pricing are dictated by it.

Google’s Compute Engine, App Engine and Load Balancer – Intelligent and True PaaS

Amazon changed load balancing with the advent of the Elastic Load Balancer. And while AWS’ ELB mostly lives up to its name, in some crucial ways it doesn’t (I wrote an earlier post about this). Google’s Load Balancer, by contrast, works as advertised and scales beyond expectation:

  1. Google’s Load Balancer uses its proprietary ML algorithms to figure out which nodes to route more traffic to. AWS and Azure are not there yet.
  2. Compute Engine’s LB does not require pre-warming, unlike the AWS ELB. This means that traffic spikes are better handled on Google’s Compute Engine.
  3. On the PaaS side, Google App Engine is a true PaaS offering. Compare this to AWS’ Elastic Beanstalk, which is essentially a PaaS-like service built atop EC2 instances: Beanstalk needs at least one EC2 instance constantly up and running in order to function, whereas App Engine can scale all the way down to zero.
  4. Performance SLAs – Google’s touted test of its Load Balancer handling 1 million requests per second is real.

The Security Advantage

AWS and Azure rely on networking and segmentation (subnets, security groups and so on) to isolate their services and instances. Google has a completely different approach (although network segmentation is also possible in GCP): every service and every instance within Google Cloud has a unique cryptographic identity. Security does not rest on network segmentation alone; it rests on these unique identities, which are managed through individual policies. This makes it virtually impossible for anyone who does not possess the right private key to get to a GCP instance, regardless of whether it is exposed to the public or not.
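
As a small taste of this identity-first model, here is a hedged sketch (project, bucket, and service-account names are invented) of granting access to a specific identity on a specific resource, using the google-cloud-storage library:

```python
# pip install google-cloud-storage
from google.cloud import storage

client = storage.Client(project="my-project")   # hypothetical project
bucket = client.bucket("my-private-bucket")     # hypothetical bucket

# Access is granted to an identity, not to a network range.
policy = bucket.get_iam_policy()
policy["roles/storage.objectViewer"].add(
    "serviceAccount:reader@my-project.iam.gserviceaccount.com"
)
bucket.set_iam_policy(policy)
```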

Encrypted Traffic within Google Cloud

When you go from data center to data center within GCP, all application-level protocols (HTTP, HTTPS, FTP and so on) are wrapped in Google’s proprietary RPC protocol. This provides full encryption in transit between data centers. Even WITHIN a data center, Google is close to offering similar built-in encryption (anticipated in 2017). Think about this: anything you put up on GCP, whether in a single data center or across multiple data centers, automatically gets encrypted traffic to and from it!

AWS and Azure, in contrast, do not take responsibility for encrypting data in transit between VPCs, regions, or Availability Zones; in both clouds, that falls entirely on the customer. Again, owning the underlying network allows Google to impose its own proprietary protocols on all transit traffic.

Protection Against the Internet

The biggest threats come not from within GCP, but from without. To protect your GCP resources against internet attacks, Google minimizes the number of entry points into your Virtual Private Cloud (VPC).

Pricing and Cost Savings – Google’s Sustained Use Discount

Both Azure and AWS require you to make upfront decisions to get the biggest cost savings. And while AWS’ Reserved Instances are a great way to save on cost, they do require an upfront commitment. Google’s approach is: we don’t want you to worry about it upfront.

Google essentially monitors your ‘sustained’ usage and automatically discounts its invoice accordingly – the longer your instances run within a billing month, the bigger the discount. A rough illustration of the math follows.
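
Using the tier percentages Google published at the time (each successive quarter of the month billed at 100%, 80%, 60%, and 40% of the base rate), a full month of usage nets out to roughly a 30% discount:

```python
# Incremental billing rates for each quarter of the month (published tiers).
TIER_RATES = [1.00, 0.80, 0.60, 0.40]

def effective_rate(fraction_of_month: float) -> float:
    """Average rate paid for an instance that runs this fraction of a month."""
    billed, remaining = 0.0, fraction_of_month
    for rate in TIER_RATES:
        portion = min(remaining, 0.25)
        billed += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return billed / fraction_of_month

print(f"{1 - effective_rate(1.0):.0%} discount for a full month")   # 30% discount
print(f"{1 - effective_rate(0.5):.0%} discount for half a month")   # 10% discount
```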

Static Content Storage and Caching

Static content in AWS can be hosted in multiple ways, including the popular S3 buckets. However, S3 buckets are simply storage, not a cache. Caching static content in AWS requires a separate service, Amazon’s CloudFront CDN. While CloudFront is a great service, it does require you to maintain two services to implement one common use case – caching frequently served static content!

Google figured that most people who want to store tons of static content will most likely also need caching for the frequently served pieces. Google Cloud Storage (the equivalent of S3) has CDN-style edge caching built in, which means your public content gets cached automatically.
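
Here is a hedged sketch of that single-service workflow (the project, bucket, and file names are invented): upload a public object with a cache policy and serve it straight from Cloud Storage.

```python
# pip install google-cloud-storage
from google.cloud import storage

client = storage.Client(project="my-project")   # hypothetical project
bucket = client.bucket("my-static-assets")      # hypothetical bucket

blob = bucket.blob("css/site.css")
blob.cache_control = "public, max-age=3600"     # let edges cache for an hour
blob.upload_from_filename("site.css")
blob.make_public()

# Served (and cached) directly; no second CDN service to configure.
print(blob.public_url)
```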

BigQuery and the BigData Advantage

BigQuery is arguably the largest use case for GCP currently. It is an integrated, fully hosted data analytics platform that scales automatically to thousands of nodes. In contrast, AWS’ Redshift requires manual configuration to scale. In fairness, though, Redshift sits in a larger ecosystem with several mature adjunct services (such as Kinesis Firehose for streaming data into Redshift).
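
To show how little ceremony BigQuery requires, here is a minimal query against one of Google’s public datasets (assumes the google-cloud-bigquery library and default credentials; the project name is hypothetical). There is no cluster to size or provision first:

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# No nodes to provision; BigQuery fans the scan out automatically.
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_current`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(sql).result():
    print(row.name, row.total)
```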

Summary

While there are far too many sub-topics in cloud computing to do a full-blown comparison, at first glance Google’s Cloud Platform (GCP) has significant advantages over its competitors. Google has solved some of the toughest problems in distributed computing, not because it wanted to win the cloud war, but because it wanted to win the ‘search’ war. Search, at its heart, is the ultimate test of distributed computing and machine learning algorithms. Couple Google’s machine learning head start with its Google Fiber advantage, and you have a serious competitor entering the Public Cloud arena.

Thoughts? Comments?

Anuj holds professional certifications in Google Cloud, AWS as well as certifications in Docker and App Performance Tools such as New Relic. He specializes in Cloud Security, Data Encryption and Container Technologies.
