10/12/2015 09:05 AM

Amazon Re:Invent: AWS Takes The Next Step

AWS announced a bunch of new services at Re:Invent, but more noticeable was the revelation of a more assertive Amazon.

It was a more confident, competitive Amazon Web Services (AWS) at this year's Re:Invent in Las Vegas, which wrapped up Friday, Oct. 9. That showed in a number of ways, from the jaunty way Amazon executives conducted themselves on stage, to the DJ spinning tunes in the registration lobby, to the "swag" counter dispensing AWS hoodies to attendees.

Andy Jassy, the senior vice president most responsible for the business plan behind AWS's infrastructure technology, wasn't just talking up Amazon's new services as usual. He was also casually, and in passing, swatting at the competition.

As AWS wades into business intelligence, a field with many well-established players including Cognos, Information Builders, and MicroStrategy, Jassy said AWS had designed the user interface for its BI entry, QuickSight, to be easy for non-technical people to use. He then showed a user interface from "an old-guard BI vendor" that was "hard to use" and "rather janky." He didn't say whose it was but dropped the hint, "the name rhymes with hog nose."

Building on Success

Earlier this year, Gartner cloud analyst Lydia Leong doubled her estimate of the margin by which AWS leads its competitors. She said AWS has 10 times the compute power of its 14 largest competitors. Still, it's been reported that Microsoft and Google are growing at a faster rate, percentage-wise, than Amazon.

Given that Amazon is so much larger to begin with, it's hard for its growth rate to reach the lofty percentages of smaller competitors. So how fast is it growing? Instance use is up 95%; storage use, 120%; and database services, including caching and data warehousing, 127%, said Jassy. That means AWS revenues, reported at $4.64 billion for 2014, are growing at 81%, he added.

In case you didn't get the full message, "Amazon by far is the fastest growing, multi-billion dollar technology company in the US," he said.

(Image: JurgaR/iStockphoto)

We tend to think of the cloud as offering different sizes of virtual servers, like Starbucks coffee cup sizes. But cloud instances actually vary in both size and operational characteristics, as AWS keeps tailoring them to appeal to new types of customers in the market.

At Re:Invent, AWS introduced the T2.nano, a virtual server that's smaller than the T2.small or T2.micro. The T2.small consists of one virtual CPU (equivalent to a 2007 Xeon processor) and 2GB of memory for 2.6 cents an hour; the even smaller T2.micro is one virtual CPU and 1GB of memory for 1.3 cents per hour. The T2.nano keeps the single virtual CPU but has only 512MB of memory. All of the T2s use a "burstable" CPU, meant to run at steady state on one virtual CPU -- a fraction of the physical core -- for a low-traffic website or little-used database. But AWS lets users amass credits during these periods of low use, then burst into the full physical core for short periods when needed. Each credit is equal to a minute of use of the full CPU.

The T2 family can amass up to 24 hours of credits, a big cushion for future traffic, and an attractive operational characteristic for many owners of small websites and online business applications.
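
The credit mechanics are easy to model. Here is a minimal Python sketch of how such a burstable scheme behaves; the baseline share, accrual rate, and cap are illustrative assumptions for the sake of the example, not AWS's published per-instance-type figures.

```python
# A minimal sketch of the "burstable" credit scheme described above. The baseline
# share, accrual rate, and credit cap are illustrative assumptions, not AWS's
# published figures; one credit = one minute of the full physical core.

def simulate_credits(cpu_demand, baseline=0.10, accrual_per_min=0.10,
                     max_credits=24 * 60):
    """Walk through per-minute CPU demand (fractions of the full physical core)
    and track how many burst credits the instance banks or spends."""
    credits = 0.0
    minutes_at_full_core = 0
    for demand in cpu_demand:
        if demand <= baseline:
            # Below the baseline share: bank credits for later bursts.
            credits = min(max_credits, credits + accrual_per_min)
        elif credits >= demand - baseline:
            # Spend banked credits to burst above the baseline.
            credits -= demand - baseline
            if demand == 1.0:
                minutes_at_full_core += 1
        else:
            # Out of credits: throttled back to the baseline share.
            pass
    return credits, minutes_at_full_core

# Example: 30 quiet minutes followed by a 10-minute burst at the full core.
demand = [0.05] * 30 + [1.0] * 10
remaining, full_minutes = simulate_credits(demand)
print(f"credits left: {remaining:.1f}, minutes served at the full core: {full_minutes}")
```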

In July 2014, AWS chief evangelist Jeff Barr published a blog explaining credits and burstable operational characteristics.

So if Amazon is willing to go small, why is it playing so hard at the other end of the spectrum as well?

CTO Werner Vogels introduced the X1, a supersize-me virtual server with up to four Haswell processors and 2TB of memory. That's 2,000GB of RAM, more than some data warehouses hold. Previously, the largest amount of memory Amazon offered with any instance was 200GB. Who needs 2TB in a virtual server? Someone, apparently. The new virtual server also takes away Microsoft's bragging rights to the largest virtual machines available in the public cloud, a claim Scott Guthrie made Sept. 29 during a virtual event, AzureCon, when he cited Azure's G series, with the high-end G5 topping out at 448GB of RAM.

Probably the most overlooked item in the blizzard of announcements Oct. 7 was Jassy's quick reference to AWS Inspector. It's well known among cloud users that Amazon runs incoming workloads through an automated inspection, looking for malware or suspicious behaviors in the code. Inspector may not be the only reason, but so far there have been no reports of malware getting inside EC2 and then running amok. Now Amazon is making Inspector available to customers. AWS will place an agent on the resources that a customer tags as part of one application. The agent will watch network traffic, file system use, and active processes, looking for security or compliance issues.

That operational data is then compared against a set of security rules and best practices, and the findings are grouped by severity and reported back to the customer. Amazon is making the service available on a preview basis. Letting customers run this inspection on their own schedule, when and for as long as they choose, may both expose hidden vulnerabilities that have crept into a workload and save time on routine checks by the customer's security staff.
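
Amazon hasn't published the rule engine behind Inspector, but the flow described -- observations from an agent checked against rules and bucketed by severity -- looks roughly like this hypothetical Python sketch. The checks, rules, and data structures here are illustrative, not Inspector's actual API.

```python
# Hypothetical sketch of the Inspector-style reporting flow described above:
# agent observations are checked against rules and grouped by severity.
# The checks, rules, and structures are illustrative, not AWS's actual API.

from collections import defaultdict

# Example operational observations an agent might report (illustrative).
observations = [
    {"resource": "web-1", "check": "open_port", "detail": "0.0.0.0:23 (telnet)"},
    {"resource": "web-1", "check": "world_writable_file", "detail": "/etc/app.conf"},
    {"resource": "db-1",  "check": "open_port", "detail": "10.0.0.5:5432 (internal)"},
]

# Simple rule set mapping a check to a severity (illustrative best practices).
rules = {
    "open_port": lambda o: "high" if "0.0.0.0" in o["detail"] else "low",
    "world_writable_file": lambda o: "medium",
}

def group_findings(observations, rules):
    """Evaluate each observation against the rules and bucket results by severity."""
    findings = defaultdict(list)
    for obs in observations:
        rule = rules.get(obs["check"])
        if rule is None:
            continue  # no rule covers this check
        findings[rule(obs)].append(obs)
    return findings

for severity, items in sorted(group_findings(observations, rules).items()):
    print(severity, "->", [f"{o['resource']}: {o['check']}" for o in items])
```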

More information is to come, Barr said in a blog post.

[Want to learn more about Re:Invent? See Amazon Securing IoT Data With Certificates.]

Finally, AWS spent a lot of time talking about database services -- for good reason. Amazon can keep expanding its database-as-a-service offerings. It already has Oracle and SQL Server as proprietary systems, and MySQL, the MySQL-compatible Aurora, and PostgreSQL as open alternatives. At Re:Invent, it announced the addition of MariaDB, a high-performance MySQL fork, as a fourth option in that group. It also offers its Redshift data warehouse, DynamoDB NoSQL system, and ElastiCache caching service.

Taking On New Competition

These services have proven so attractive to its customers that AWS senses the disruptive role it might play in the traditional database market. It launched its AWS Database Migration Service as a preview technology last week.

Jassy said Amazon is already a $1 billion database company based on its existing services. AWS Database Migration Service is likely to expand that figure, because it can help move an on-premises system to the same system in the cloud or help a customer migrate away from a proprietary system to a different system in the cloud. The latter gives customers a chance to test Jassy's assertion that AWS Aurora will give them comparable performance for a tenth of the cost. And don't forget open source PostgreSQL's ambition to serve as an Oracle replacement by presenting itself as a compatible, look-alike system to Oracle applications.

It's been notoriously hard to get a database customer to move away from an established vendor, partly because most customers have little confidence their data can be migrated smoothly into the schemas of a would-be new provider's system. Amazon is trying to address that with a $3-per-TB database move fee, using the Migration Service with its free Schema Conversion tool.

Customers who like the idea of conversion should remember that it's free to ship data into Amazon; fees apply when you want to take it out. Amazon is making a serious bid to build a database business based on hours of usage and nothing else. That means Amazon will maintain and upgrade its database systems but won't charge an annual maintenance fee -- 20%-22% of a database license cost -- as proprietary vendors have.
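
For a rough sense of how the two billing models compare, here is a back-of-the-envelope sketch in Python. The license price and hourly rate are hypothetical placeholders; only the 20%-22% maintenance range comes from above, and the point is the structure of the two models rather than which comes out cheaper.

```python
# Back-of-the-envelope comparison of the two billing models described above.
# The license cost and hourly rate are hypothetical placeholders, not actual
# vendor or AWS prices; only the 20%-22% maintenance range is from the article.

license_cost = 100_000      # hypothetical perpetual license price
maintenance_rate = 0.21     # midpoint of the 20%-22% range cited above
annual_maintenance = license_cost * maintenance_rate

hourly_rate = 1.50          # hypothetical per-hour rate for a managed instance
hours_per_year = 24 * 365
annual_usage_bill = hourly_rate * hours_per_year

print(f"maintenance-model annual cost: ${annual_maintenance:,.0f}")
print(f"usage-model annual cost (running 24x7): ${annual_usage_bill:,.0f}")
```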

This shift in the billing model is the agent of disruptive change for database customers, and AWS appears to think that over the last year, with the introduction of its Aurora system, it has spotted a crack in the dam that has always held back any flood of such migrations.

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ... View Full Bio

Comments
kstaron, 10/23/2015 | 9:35:31 AM
2TB
Of course Amazon is going to start offering services for huge data storage. With big data becoming ever more ubiquitous and people wanting everything right now, it makes sense that someone out there might want 2TB to be able to run big data for a large company or a scientific project or whatnot. The data we need always seems to increase. Remember when we thought 64MB was huge?
Charlie Babcock, 10/13/2015 | 8:44:39 PM
This is no bubble
Ron, In exchange for the new cloud data center capacity, you have a vastly reduced enterprise data center building program. Personally, I don't think it's a bubble. I'm not sure Amazon, et al, will be able to create the capacity they're going to need fast enough for the next 2-3 years. 
Ron_Hodges, 10/13/2015 | 12:53:23 PM
My concern is a bubble...
There seems a likelihood that a bubble is growing in cloud capacity in the same way one grew in telecom capacity, and the implosion of the telecom bubble had devastating repercussions. I am concerned that in a five-year period or so, there will be so much excess data center capacity that there will be a shakeout, if not a bloodbath.
Thomas Claburn, 10/12/2015 | 7:30:05 PM
AWS
AWS needs a service to simplify its services. I wonder if anyone can name them all from memory and recount accurately what each does.