Learning to Navigate Multi-Cloud at ESCAPE/19 and ONUG
Expert insight on how to address challenges and leverage advantages of multi-cloud.
The complexities of multi-cloud deployment might give some organizations pause before they explore this route to digital transformation; the insights from the experts that follow can shed light on what to expect. A chorus of professionals gathered recently at two separate conferences in New York to share experiences and ideas on multi-cloud and help organizations choose their next steps.
ESCAPE/19 and the ONUG fall conference were unaffiliated with each other but were held on the same days and a subway ride apart. The timing and proximity of the two events reflect heightened interest in how best to implement multi-cloud and hybrid infrastructure.
The ESCAPE/19 multi-cloud conference, presented by Cockroach Labs, unsurprisingly focused on multi-cloud topics and included speakers from HashiCorp, Riot Games, Docker, and LightStep. Meanwhile, ONUG (the Open Networking User Group), an association of enterprise IT leaders, put on a conference that covered software, networking, and digital transformation themes, with multi-cloud in the mix. Its lineup of speakers came from companies including Intuit, Microsoft, IBM, VMware, and Cisco.
Both conferences offered insight from the trenches of the multi-cloud arena, with particular attention paid to identifying what does and does not work. The slideshow that follows presents a taste of the knowledge shared and questions raised about what may be ahead for organizations that hope to find some advantage by going multi-cloud.
Spencer Kimball, co-founder and CEO of Cockroach Labs, led the speakers at ESCAPE/19 with a discussion on how to deal with applications that run on the global stage through multi-cloud environments. He said that many organizations now look first to cloud-based options when they launch. “They don’t even think about shrink-wrapped software anymore,” Kimball said. “It’s really about cloud-first applications being served to their users.”
The increasingly global makeup of the cloud, he said, is offered through on-premises servers, the major cloud vendors, regional telecoms, or even private data centers. Kimball said this trend also means the playing field is more level for organizations of all sizes. “A startup that has five people just out of the gate has access to the same kind of data center resources that Google had back in the aughts,” he said. “The big problem is a lot of these technologies don’t work well together.”
There can be issues with networking, service discovery, and connecting pieces of architecture through layers of abstraction so they can talk to each other. Security, with its encryption and key management, and Kubernetes-driven deployment can also be confusing, he said. “The problem is that Kubernetes itself doesn’t work very well across clouds and multiple data centers.”
Kimball said the handling of data in the cloud represents an unsolved problem in this environment, which Cockroach Labs has focused on for the past four and a half years. “It’s really the critical element in a fully stateless global service,” he said.
Other technological constraints also affect the cloud, Kimball said. Network speeds and services continue to accelerate, for example, but their limits can still have an impact on deployment: access to, and distance from, data centers can choke the rate at which the cloud is tapped. “You really do need to eliminate geographic latencies,” he said. “That means if you have to, put the data next to the customer.”
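The geographic latency Kimball describes has a hard physical floor. A back-of-envelope sketch, assuming signals in fiber travel at roughly 200,000 km/s (about two-thirds the speed of light); the distances and function name are illustrative:

```python
# Rough lower bound on round-trip network latency from distance alone.
# Assumes ~200,000 km/s signal propagation in fiber; real paths add
# routing detours, queuing, and processing delays on top of this.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed as km per millisecond

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time, in milliseconds, for a given distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# A user ~15,500 km from a data center pays at least ~155 ms per round
# trip before any server work happens; a nearby region costs ~1 ms.
print(min_rtt_ms(15_500))  # 155.0
print(min_rtt_ms(100))     # 1.0
```

Because every request pays this tax and interactive applications often need several round trips, moving data close to users cuts latency in a way no amount of server tuning can.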
Figuring out what multi-cloud actually means can be unclear because of the different ways the term is applied, said Armon Dadgar, CTO of HashiCorp, a provider of infrastructure management tools for developers and security pros. Many of the largest organizations maintain most of their infrastructure on-prem, he said, and might take baby steps such as adopting a single cloud provider for a controlled portion of their operations and then deem that “multi-cloud.”
Dadgar said he believes such companies will eventually open up to more services as they do business, whether by intent or circumstance. “Through mergers and acquisitions, if they buy a company that is in a different cloud and now they are multi-cloud by accident,” he said. “At a certain scale, there is enough M&A activity taking place that it’s impossible that every company you buy made the exact same infrastructure choices you have.”
Increased portability of data and workflow can be a key benefit of the cloud, Dadgar said, but consideration must be given to how various elements will interact. A new implementation might break an application, for instance. The needs of flexibility might conflict with reliability and vice versa. “Workflow portability does not necessarily require or imply data portability,” Dadgar said. “These two can be very distinct.”
Jeremy Hermann, previously head of machine learning platform at Uber, sat with software architect Mike Shindle of Riot Games to discuss some opportunities and challenges for data and applications as enterprises take on multi-cloud strategies.
Riot Games is the developer and publisher of League of Legends, an online battle arena game that Shindle said sees about 100 million active players monthly, with as many as 8 million playing concurrently at peak. Attracting and keeping those players requires the kind of reach the cloud can provide. “We really want to be where the players are,” Shindle said. He argued that multi-cloud is now a requirement for the enterprise, not just a notion. “You can’t get away from it,” he said.
In the case of Riot Games, the company previously established private data centers to deliver its services to players. As the game grew in popularity and usage, it became necessary to work with other publishers in other regions, Shindle said, and that meant being on their clouds. “In an effort to just be where we wanted to be, we had to go multi-cloud.”
Hermann said that Uber likewise started with private data centers but as the company grew it started to embrace the public cloud. “We recognized that operating everything on-prem put a certain drag on the product engineering team in terms of flexibility,” he said. Taking advantage of the cloud helped the company scale up and introduce new capabilities, Hermann said.
Not every technology trend is met with open arms — there can be fears of calamity and upheaval when legacy systems change. New trends might go against the grain of longstanding policies and procedures within some organizations, which may trigger resistance. At the ONUG conference, venture capitalist Eric Hippeau, partner with Lerer Hippeau, gave his perspective on innovation cycles that led to the current state of the “consumerization” of technology and how it affects the enterprise.
Like most shakeups, the move to modernize can raise concerns within organizations, but change may be inevitable. “So many people around the world now have a computer in their pocket and want to do everything by using their phone,” Hippeau said. That includes consumer technology invading the workplace, which can create a conundrum for organizations. “IT departments, for good reasons, are somewhat slow to adopt the latest trends,” he said. “They have so much work to do just to maintain what they have today. What we’re seeing is a decentralization of technology within the enterprise.”
There are some changes taking place that Hippeau welcomes. “Data is the digital currency that makes everything happen,” he said. “Artificial intelligence, in the next five years, will be deployed broadly. The fuel for AI is data.” He sees the wide deployment of AI as a solution for a longstanding issue many organizations face. “I’m always surprised how difficult it is within a big organization to get to the data,” Hippeau said. “It’s fragmented. It’s siloed. All of that is going to have to get freed up and you’re going to have to mix that data with big data marketplaces.”
He also doubled down on the necessity for organizations to put AI to work on such tasks in the future. “If you’re not into AI, if you don’t have data scientists onboard, if you’re not thinking about the application of AI, you will be at a disadvantage later.”
A group of experts from the financial services and technology fronts discussed whether it is practicable to apply cloud-based security tools to on-prem systems in a hybrid cloud environment. The regulatory demands that govern the financial sector can create substantial hurdles when it comes to migrating to the cloud. Security monitoring via the cloud might gain traction if it can measure up to regulatory standards, but enterprises may have questions before giving it a shot. Ernest Lefner, ONUG co-founder and senior vice president with Ernst & Young (EY), moderated a panel composed of Lane Patterson, co-founder of EdgeUno; Bruce Pinsky, distinguished engineer with Intuit; and Harry Roberts, managing director of digital transformation and cloud services with EY.
Patterson said hybrid cloud security is still new territory for many organizations. “A lot of people are just figuring out how to use one cloud,” he said. EdgeUno is a provider of data center and managed cloud services for companies looking to expand in Latin America. Enterprises might want to emulate the approaches taken by Silicon Valley startups, Patterson said, but not overnight. He sees ways for those organizations to ease in gradually. “I’ve been a big believer of the trust model for service mesh security,” he said. “Istio and Envoy are open source examples of that framework.”
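A minimal sketch of what the trust model Patterson mentions can look like in practice, using Istio’s security API; the namespace name is a hypothetical example, not something described at the conference:

```yaml
# Hypothetical Istio PeerAuthentication policy: require mutual TLS for
# all workload-to-workload traffic in the "payments" namespace, so every
# service must present a verifiable identity before it can communicate.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments
spec:
  mtls:
    mode: STRICT
```

A policy like this lets an enterprise enforce encrypted, identity-based service-to-service communication incrementally, one namespace at a time, which fits Patterson’s advice to ease in rather than change everything overnight.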
Some organizations are ramping up their cloud presence as part of their migration plans. Financial software company Intuit had plans to make its data center more manageable and scalable, Pinsky said, but that has evolved to taking full advantage of the cloud. “Over the last three years, it’s been in the works to get all of our workload out to AWS,” he said. Once that migration is complete, the plan is to shut down Intuit’s data centers.
When organizations implement hybrid or multi-cloud strategies, they look for the best ways to drive value, said moderator Sesh Iyer, managing director with Boston Consulting Group. He asked a group of technologists in a town hall setting at the ONUG conference for their perspective on how to approach transformation plans in ways that can deliver results.
The panel was made up of Hillery Hunter, CTO for IBM Cloud; Chris Wolf, CTO for global field and industry at VMware; Scott Harrell, senior vice president of enterprise networking business at Cisco; Sinead O’Donovan, partner director of program management at Microsoft; and Brian Johnson, CEO of Divvy Cloud.
The expected scale of adoption and deployment in the coming three to five years for hybrid and multi-cloud was up for debate with no consensus to be found. “Five years in terms of technology is a really long time,” Harrell said.
There was talk of increasing interest in interoperability, working across different environments, and managing the complexity that would come with it. “To VMware, open source is an important part of our strategy going forward to solve these problems,” Wolf said. The trick, though, is not to over-anticipate how projects will evolve. “We don’t want to predestine an application’s journey at the time we create the application,” he said. “We don’t know what the future may hold but we do know by history that we should expect change to happen.”
Having the flexibility to react to change, Wolf said, means having the ability to run an application in different places. “This is where hybrid cloud is really important,” he said. “You should have a hybrid cloud layer where I can run not just any application or any open source project but also any cloud service involved.”
O’Donovan said a dev-centric focus could help promote such flexibility when designing new cloud data applications. “Think about the APIs the developer writes to and where can they run,” she said. Having a model of consistent APIs that can run and work anywhere, she said, makes it easier for the developer to create desired services. “The system needs to be smart enough to make decisions on what should run where,” O’Donovan said.
For example, Starbucks runs fleets of IoT devices, she said, such as coffee dispensers connected to Microsoft Azure, with Azure providing the local compute in retail locations. This creates an opportunity for Starbucks to build a service around the technology. “Some of it runs in the store. Some of it runs on the network edge. Some of it runs in the region,” O’Donovan said. “What you don’t want to do is make it so complicated for the developer to run that app that they get overwhelmed or need special skills.”