Former Microsoft CIO Jim DuBois Dishes On AI and Future of IT
The industry veteran and author looks beyond hype around generative language models as businesses begin adopting emerging technologies at a feverish pace.
Whether it's adopting AI or shoring up environmental, social, and governance (ESG) practices, Jim DuBois sees an opportunity to look at changes not as chores or responsibilities, but as real value propositions for enterprise organizations.
DuBois served as CIO of Microsoft from 2013 until 2017 and oversaw the tech behemoth's global security, infrastructure, IT messaging, and business applications. Since stepping down from that role, DuBois has become an author, speaker, and consultant. The tech boom (and bust) spurred by the pandemic has created plenty for DuBois to ponder as information technology continues marching through turbulent times.
He took some time to chat with InformationWeek about a wide range of issues impacting the IT industry and CIO roles. The following is an edited partial transcript:
We wanted to get your thoughts on emerging tech. AI is at the top of many minds right now as generative language models grab headlines daily. How can CIOs keep up with the pace of innovation that's happening, and should they worry about the ethical implications surrounding AI use in enterprises?
There’s a lot of hype about this … I think as long as we use AI over our data and have some controls, there won’t be a lot of ethics issues. Microsoft, for example, has been using [AI] like a co-pilot. And it will help enterprises significantly. There’s a lot of hype about the possibility of AI replacing humans in jobs. But I think it’s more likely to take the drudgery out of jobs and really enhance what we’re able to do with increased productivity.
Do you think that there’s a need for regulation for artificial intelligence applications? Is that even possible with our current political landscape?
Yes, I think that there is a need for regulation. The simple example that I always use is … in self-driving cars, if you have a situation where the car must decide between not hitting a pedestrian or crashing the car and possibly killing the driver, then whoever is writing the programming has to make that choice. Obviously, the pedestrian should be protected. But nobody is going to buy a car that’s going to pick, “Kill the driver.” So you have to have some kind of regulation for things like that. But you also have to be careful not to be hypersensitive and over-regulate. You see some of the people in government, when they interview people like [Meta CEO Mark] Zuckerberg [during a 2018 US Senate hearing on digital privacy], the questions that they’re asking -- they clearly have no idea what it is that they’re talking about. And if you’re going to have people like that make the rules, we’re not going to end up helping.
Every major tech company -- with the glaring exception of Apple -- seems to be releasing some new generative language AI product. Google, Microsoft, and Amazon have all announced major generative AI initiatives. Why do you think Apple is sitting on the sidelines with generative AI?
I think Apple doesn’t see an enhancement to their product yet. Google and Microsoft have search -- and that fits with the generative AI piece. Apple leverages one or the other of those products -- they don’t have their own search engine, so they don’t have the massive amount of data to use for a generative AI model yet. Apple is also much more planful about their product design. Microsoft and Google are much more experimental. Apple is less about doing experiments. They are more about being deliberate and being the best. They are not going to just jump on the latest and greatest thing.
Let’s talk about “future of work.” The term feels like a misnomer at this point. Future of work seems to be the “present of work.” Hybrid and remote work are both here. How do you see the hybrid and remote landscape evolving and changing?
We’ve got to recognize that only some jobs are capable of being hybrid or remote -- there are a lot of jobs that just aren’t. And there are a lot of issues with employees over that point. One of my daughters used to be the product manager for the loyalty program at a large department store and worked as part of the online team. Coming out of the pandemic, the store initially made a policy that all of the online team could continue to work remote. But all the people who worked in the stores had to come back, and there was a huge revolt within the company -- the people who had to come into the stores felt like they were second class compared to the online team. That is something that employers with workers in both camps are going to have to deal with.
One of the things we have to figure out in the future of work is that a huge part of the population isn’t able to take advantage of this hybrid and remote opportunity. And what do we do for them? Do we end up getting to a place where people are picking jobs based on whether they can work remote or not? And are we going to have to compensate people differently for being on- or off site? That’s something that hasn’t been solved … There are a lot of companies that haven’t figured out how to keep the collaboration and the culture going in a remote workforce. So they just said, “Oh, we’ve got to get people back into the office to do that.” I would say, “Or, you could figure out how to collaborate and keep your culture going with remote.”
Without the pandemic, would we have jumped headfirst into hybrid and remote work like this?
We never would have gotten here as fast. I always say, never waste a crisis. And the pandemic was a crisis that made us figure out stuff and made us do things that we otherwise would have approached cautiously. It would have taken decades to get the same progress that we made in the last couple of years. For as horrible as the pandemic was, it had a bunch of huge benefits for tech.
Given that so much has changed over the last few years, how do you see the next decade playing out in the overall IT landscape?
I think a lot of the drudgery part of jobs will go away quicker than people realize -- although, we will fumble and make mistakes with AI and try to figure out regulation. And we’ll keep trying. We’re going to get to a place where the landscape looks dramatically different in 10 years just on jobs and how people do things -- because of AI. We’re not going to have this big outsourcing to call centers. We’re still going to need people who are good at that, but they are going to have help from AI. AI is going to be the co-pilot.
How can we continue making strides in DEI and ESG initiatives?
I’m a believer in carrot rather than stick incentives. Rather than compliance requirements, we need to focus on the fact that there’s so much value in ESG and in having a more diverse team. We need to focus more on the incentives and less on the “because we told you to” part. And I think a lot of the biggest advocates tend to want to use a stick approach to making those initiatives work. I’m afraid that everyone is just going to try to do the minimum to hit requirements. So let’s stop trying to make this something that we enforce. Instead, let’s make it something we do because there are so many benefits to the environment, to society, and to the company itself.