How Bloomberg Designed a Layered Zero-Trust Philosophy

Bloomberg’s multifaceted business requires a layered approach to data management and security, a company executive tells InformationWeek.

Shane Snider, Senior Writer, InformationWeek

February 1, 2024


Nailing down a definition of “zero trust” within any organization is a huge challenge. Within the sprawling operations of one of the world’s largest financial, software, data, and media companies, a definition is not just a challenge, it’s crucial.

That’s what Phil Vachon, Bloomberg’s head of infrastructure in the office of the CTO, tells InformationWeek in an interview. Forbes ranks Bloomberg 33rd on its list of the largest private companies in the US, with more than 19,000 employees around the world and annual revenue of $12.5 billion.

With such a sizable headcount and data flowing through hundreds of journalists and analysts daily, the company is faced with the herculean task of securing its own data and making sure the right people have access to company systems.

Vachon chatted with InformationWeek about the complexity of the company’s security operation and how it tailored a multi-layered zero-trust philosophy to meet its needs.

[Editor's note: Quotes have been edited for clarity]

Zero Trust sort of has a fluid definition throughout the industry. What is the definition of zero trust for Bloomberg and how has the organization developed the philosophy?

When a vendor or anyone says to me, "I’m selling you a zero-trust solution," I say, "I need an answer as to what that means to you at that point in time." Zero trust is actually a design philosophy. It’s a philosophy about how you build and architect systems to be secure from the ground up. It’s an infrastructure-first approach, and that means you have to think more cohesively and coherently about who people are -- that identity concept needs to be baked in fundamentally. What are people -- or services or systems -- allowed to do, and what should they be allowed to do? The moniker we use is the principle of least privilege: you should be privileged just enough to do your job and nothing more. That might sound like some paranoid spy nonsense, but it’s about a mindset of, "Hey, would I be comfortable putting this system, this service, this functionality out on the open internet?" And if not, why?


We want that design practice to apply to everything we do. At Bloomberg, we think about the world in terms of having that fine-grained access control, having a strong sense of identity for services, systems, and users baked into it, and making sure that only the users or systems that are authorized to access data can actually access that data. We design our systems to be very robust and resilient -- so that if an attacker does get into our system, they wouldn’t get very far.


Historically, everyone built that tough, hard outer shell around the enterprise, and then people could move freely once they were inside. We’re saying that we want to build that hard shell around everything, from a database through to a developer laptop to the services that deliver key market insights to a client.
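To make the design philosophy concrete, here is a minimal Python sketch of the deny-by-default, least-privilege check Vachon describes. The roles, permissions, and identities are hypothetical illustrations, not Bloomberg's actual systems.

```python
# Minimal sketch of a deny-by-default, least-privilege check.
# Hypothetical roles and permissions; not Bloomberg's implementation.

ROLE_PERMISSIONS = {
    "market-data-reader": {"read:market-data"},
    "reference-data-editor": {"read:reference-data", "write:reference-data"},
}

def is_allowed(identity_roles, requested_action):
    """Allow only if some role explicitly grants the action; otherwise deny."""
    granted = set()
    for role in identity_roles:
        granted |= ROLE_PERMISSIONS.get(role, set())
    return requested_action in granted

# A reader role can read market data but cannot write reference data.
print(is_allowed(["market-data-reader"], "read:market-data"))      # True
print(is_allowed(["market-data-reader"], "write:reference-data"))  # False
```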

Tell us a little bit about the challenges of building a zero-trust philosophy with so many corporate IT users.

It’s a big coordination problem. The good news is humans as a whole have become very used to entering a username and password and providing a second factor as part of how they start the day. So, bringing that concept of identity -- who is the person at the endpoint -- into our backend systems and into the various services people need to do their jobs becomes an exercise in picking out the right standards-based methodologies. For example, expressing authentication and making sure it’s as transparent as possible to the users that this is happening. You don’t want the user to get exhausted typing in their password dozens of times per day. Instead, you want them to enter it once a day, and make sure that information is conveyed to everything they communicate with within our infrastructure.
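Here is a minimal sketch of that "enter it once a day" idea: the user authenticates once and receives a short-lived signed token that downstream services verify silently. The key, claims, and helper names are hypothetical, and a real deployment would rely on established standards such as OIDC, SAML, or Kerberos rather than hand-rolled tokens.

```python
# Minimal sketch of conveying a once-per-day login to backend services
# via a short-lived signed token. Hypothetical names; illustration only.
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"demo-key-not-for-production"

def issue_token(username, ttl_seconds=8 * 3600):
    """Issued once at login; carries the user's identity and an expiry."""
    claims = {"sub": username, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token):
    """Each downstream service checks the signature and expiry; no re-prompt."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if time.time() > claims["exp"]:
        raise PermissionError("token expired")
    return claims["sub"]

token = issue_token("jdoe")   # user types a password plus a second factor once
print(verify_token(token))    # every service call re-checks identity silently
```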


It's a coordination problem because, in the end, the identity you have on your laptop might be very different from what needs to be expressed to some third-party service, or from what we use to authenticate to a database. So, there are a lot of layers of making sure we can map one to the other, making sure we have a good understanding of what the policy is … there’s a lot of work to make sure we have what we call “policy intent,” or what you should be allowed to do.
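A minimal sketch of that mapping problem: a single corporate identity is translated into system-specific roles or scopes according to a declared "policy intent." All users, systems, and mappings shown here are hypothetical.

```python
# Minimal sketch of mapping one corporate identity to per-system credentials,
# driven by declared policy intent. Hypothetical names; illustration only.

# What the person *should* be allowed to do, stated once ("policy intent").
POLICY_INTENT = {
    "jdoe": {"analytics-db": "read_only", "vendor-api": "submit_orders"},
}

# How that intent is expressed in each downstream system.
SYSTEM_IDENTITY_MAP = {
    "analytics-db": {"read_only": "db_role_readonly"},
    "vendor-api":   {"submit_orders": "vendor_scope:orders.write"},
}

def credential_for(user, system):
    """Resolve the corporate identity to the system-specific role or scope."""
    intent = POLICY_INTENT.get(user, {}).get(system)
    if intent is None:
        raise PermissionError(f"{user} has no declared intent for {system}")
    return SYSTEM_IDENTITY_MAP[system][intent]

print(credential_for("jdoe", "analytics-db"))  # db_role_readonly
print(credential_for("jdoe", "vendor-api"))    # vendor_scope:orders.write
```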

And you feel like that layering is part of the culture at Bloomberg, throughout the organization?

We’ve invested heavily in workflows and technologies to allow us to have those fine-grained controls. It’s not just about whether one person is allowed to access a database. It’s about why that person should be allowed to access that database. Maybe the role requires the ability to view data or update data, or whatever it may be. We want to make sure we understand all of the workflows and keep track of the organizational dynamics that change whether or not that one person should have those privileges. So, we’ve invested very heavily not just in making it easy to make those decisions, but also in making sure that we have the right workflows on top to track who you are, where you work, and what your role is, and then making sure that those privileges don’t follow you as you move into different roles.
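A minimal sketch of privileges that do not follow a person across roles: access is recomputed from the person's current role on every check rather than accumulated over time. The roles and grants are hypothetical.

```python
# Minimal sketch of privileges derived from the current role rather than
# accumulated over time, so access does not follow a person who changes jobs.
# Hypothetical roles and grants; illustration only.

ROLE_GRANTS = {
    "equities-analyst": {"read:equities-db"},
    "hr-partner": {"read:hr-records", "update:hr-records"},
}

# Single source of truth for who currently holds which role.
CURRENT_ROLE = {"jdoe": "equities-analyst"}

def effective_privileges(user):
    """Recomputed from the current role on every check; nothing carries over."""
    return ROLE_GRANTS.get(CURRENT_ROLE.get(user, ""), set())

print(effective_privileges("jdoe"))     # equities grants only
CURRENT_ROLE["jdoe"] = "hr-partner"     # organizational change: new role
print(effective_privileges("jdoe"))     # old equities access is gone automatically
```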

ChatGPT has obviously changed everything and accelerated AI adoption throughout the enterprise. Are you starting to take advantage of the tools that are available?

Obviously, there’s been some fantastic work done internally around large language models. For me, it’s actually very exciting. Bloomberg was one of the very first to publish a very detailed overview of how to create a model that’s tailored to an application like finance, and one of the first to actually get out a recipe for how you design such a model. So, it’s very exciting to build on that. AI is going to be core to our strategy and to how we ensure that, out of our massive troves of data, the right data is being put in front of our customers -- to make it easy for them to find that data, find functionality in our products, and get the right answers even faster than they were able to before. Then, we have to pivot back to the security side of things. One of the things we’re really interested in is how we augment security professionals in doing their jobs. How do we get to a point where we can answer any sort of question about our infrastructure and our configuration systems -- to be a co-pilot for incident responders or people investigating security risks, or even just to enable developers to make the right decisions? We’ve been doing AI at Bloomberg since 2009, and in that time, I think we’ve come a long way. Adopting some of the GPT models should enable us to build innovations that accelerate various aspects of our business.

About the Author

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
