PARC CEO, Experts Discuss Digital Transformation
At Gartner Symposium ITxpo 2016, the CEO of PARC brought three experts on stage to talk about digital transformation. InformationWeek found time after the session to go deeper into the subject.
![](https://eu-images.contentstack.com/v3/assets/blt69509c9116440be8/blt8db14589e582aa9e/64cb3d3768fcd5ef597cc305/parc_signage.jpg?width=700&auto=webp&quality=80&disable=upscale)
Xerox PARC (now known as "PARC, a Xerox Company") has a long and storied history in the computer industry. Known as the research center from which any number of innovations sprang, PARC still has a reputation as one of the places where pure research takes place on projects that might not have a direct impact on the products we use for years, or even decades.
At this year's Gartner Symposium ITxpo, PARC CEO Steve Hoover led a panel on digital transformation. Joining him on the panel were Victoria Bellotti, a research fellow at PARC; Gillis J. Jonk, strategy consultant and business innovator at A.T. Kearney; and Gytis Barzdukas, head of product management at Predix, part of GE Digital.
They spoke to a room full of IT pros and corporate executives about some of the most fundamental changes affecting their businesses today.
[See Gartner's 10 Tech Predictions That Will Change IT.]
In the session, the four discussed topics like what "digital natives" expect as customers from natively digital businesses. Those expectations include such things as apps and products that are:
Mobile-first and always connected
Intelligent and personalized
Real-time and on-demand
They also talked about what companies have to do in order to reach these customers. The "to do" list includes:
Focus on outcomes, not merely products
Optimize existing businesses and invest in new options
Experiment and learn
After the well-attended session, I had a chance to get together with the four in one of the hallways at the Walt Disney World Swan hotel, where the panel took place. I asked about some of the issues that had occurred to me while listening to the presentation, beginning with one of the most basic questions on terminology.
Read our hallway interview below, edited for clarity and space. It's not a short one filled with quick sound bites. These are people who have spent a great deal of time thinking about the topic, and they're eager to engage with the industry on the issues around transformation.
I'm eager to know what you think about the whole idea of "digital transformation." Is it just another buzzword that we'll forget in a couple of years, or is it the label for something serious that's taking place in the business world?
Steve Hoover: I think a better phrase, and one we used in our talk, is "natively digital." What people are trying to capture with the phrase "digital transformation" is this: We've used this word "digitization" for a while, and what it captures well is the idea of digitizing an existing way of doing things. What people are getting at with the phrase digital transformation, and what we're trying to get at with natively digital, is that it's not about digitizing what you do today. It's about totally re-imagining what you do and how you do it in a way that leverages digital from the ground up.
Gillis Jonk: The thing is, when you say digital transformation in this way, it suggests that you need more digital. You need transformation to get this "more digital" into your firm. I think maybe we are moving into a situation where it just becomes business as normal, and you could argue that you do transformation with all the means available to you, including digital. But it doesn't restrict itself to digital. I would argue that a growing number of companies will just start calling it "transformation" again, including digital.
With all this digital transformation, there's a great deal of discussion about a couple of things. One is the instrumentation and control of the non-digital universe (the sensors, the IoT -- and acting on it). To what extent do you think that we're properly instrumented and improperly "intelligenced"? Is the IoT over-sensored and under-smart, or do we need more sensors to really reach our potential?
Gytis Barzdukas: I would argue that we're not over-sensored or over-instrumented yet -- that there's still a lot of opportunity to apply sensors and to track both human behavior and machine behavior. I do think that we're not taking advantage of the information that we have captured. We're behind the curve as far as that goes. The ability to actually take advantage of the data that we have, or have access to -- we haven't even reached the beginning of that. In looking for insights in that information, I still think there's a huge opportunity.
So while I would say we're not fully instrumented yet, I think we're lagging even more in taking advantage of the information we have.
Victoria Bellotti: I do a lot of my work with people who are working in machine learning and analytics to develop technologies that [themselves] develop inference about context. We call it context-aware computing. And I agree that there's a lot more sensing to be done. There are huge opportunities to automate and anticipate what people are doing. We can make algorithms that can essentially learn what people are doing and then anticipate in advance what they do.
You can sense patterns in weather or traffic and then see how [they affect] what people do. People are creatures of habit, so you can anticipate that, when it rains, Wendy takes the bus instead of driving. We can infer, though we don't know, that it's because Wendy doesn't like driving in the rain. So, there's a lot of possibilities here for providing services based on anticipating what people are doing.
I think there's a really big area of concern, though, about what is appropriate to capture; you know, how far is too far? I would love technologies that would anticipate my every need, but I'm also very concerned about abuse of access to that data, or about my privacy being intruded upon by adverts that anticipate that I'm hungry, so that suddenly I'm seeing 50 different things that interfere with what I'm trying to do.
There's a huge area of research that remains for thinking about the user experience of this highly sensed, inferential, analytical world. And also a lot of concerns from the legislative side where the law is kind of bumbling along trying to keep up with the potential of the technology. And I think that will continue and just get worse.
Everyone in this group and most people here at the conference are predisposed to think of the increased digitalization as a good thing. Not everyone sees it that way. The morning keynote began on a dark stage. I don't want to ask if the anxiety is well placed, but is it rational?
Victoria Bellotti: I'm going to say, I think "yeah." As a social scientist, I know that when we say that machines are smart or intelligent, what we actually mean is "amazingly dumb." The inferences that machines make are often the wrong ones. So people are right to be a little suspicious -- they're often suspicious in the wrong way, though.
They think things are happening that don't happen and they don't think about the things that are actually happening, which are, sort of, mistakes. This happens whether we're talking about users or designers.
I want to give you a good example. I was just in a session where a speaker said that systems are going to become conversational. He made it sound as if this was the big trend. I think it's certainly a trend. But with conversational systems, we tend to think of a system that can speak as having intelligence, when actually it's a brainless, dumb mimic of something that has intelligence. And therefore, it does brainless, dumb things.
For example, if you're someone who notices that someone has a lot of high tech in their house, it's a good idea to wait until they go to work, stand outside the house, and say, "Alexa! Let me in!" That's what's happening now; Alexa pretends to be smart but makes really dumb decisions.
So, people are right to be worried. They're often not worried about the right things, but it's very good to ask questions about what can happen with these technologies, especially when they get into situations we're not anticipating.
Gytis Barzdukas: I don't know that I have a counter-point -- I think I agree. I think the one thing I would say is that change [is] always threatening to people. I think you develop new norms as a society around those changes. And people will prevent certain types of change if it becomes outside their norms and outside what they're comfortable with, and then they'll accept the ones that benefit them.
I think we'll see an evolution here but I think people are always a little concerned about the future, concerned about the advent of the technology, and we can look at history and see where there are many examples of that. Society adapts to it, but if it goes too far outside the bounds of what makes people comfortable it will be prevented.
Victoria Bellotti: People place their trust in technologies, sometimes too easily. You can see evidence of this on YouTube with the videos titled "My Tesla tried to kill me." This is very early, experimental, autonomous vehicle technology that seems to do all right most of the time. So people who have been told by Tesla, "We don't guarantee this, folks, you need to be monitoring the system," stop monitoring anyway -- this is a very bad decision by Tesla, by the way, because the moment you take a job away from the user, the user is no longer paying attention and will fall asleep -- and so the car is being given 100% of the decisions rather than 99.9% of the decisions.
People will trust technology for the wrong reasons. Often they don't understand what's going on, they're suspicious, but they get it wrong. So designers really need to think through how users are going to interact with this technology, how they're going to break it, how they're going to do the things you told them not to do. They don't care what you say; whatever the technology allows them to do, they'll do it, whether it's good or bad.
Gillis Jonk: I think there's some apprehension around the fact that technology is replacing jobs. It's like a battle, where technology is being used to make things more efficient and technology is being used to make new stuff to do. Now, I think the efficiency front is winning. I don't think we see a huge growth in jobs coming from technology, but I think it's inevitable that sooner or later we get fed up with efficiency and start looking for new stuff we can do with technology.
And we're starting to use technology more and more for every part of our life. Most of us book our own travel. We do our own banking. We spend -- what? -- 30 minutes a day, on average, on Facebook. If you add up the time of all the people on Facebook every day, it's the equivalent of about 250,000 FTEs for Facebook. If you think about it, those 250,000 people are creating advertising opportunities that Facebook monetizes. But [they're] 250,000 volunteers. They don't get paid.
So, you've taken a good chunk of potential employees out of the economy because they're entertaining themselves and each other. Now, it does equate to quality of life. But at the same time, you're not innovating, or you're not "entrepreneuring," you're not creating a new business, because you're creating advertising opportunities for Facebook.
That dynamic, we're just starting to understand it, but at some point we'll have to stop creating those advertising opportunities and start "entrepreneuring" again, start turning it into opportunities for ourselves and for the economy.
Victoria Bellotti: Something a little disquieting... You know, there's no one here who's really at the wheel when it comes to policy or strategy for how technology's emerging properties affect society at large. As a social scientist, I spend some time talking to people about their addiction to Facebook. It's out of control. They basically say, "I'm spending way too much time on Facebook."
I, myself, spend a lot of time listening to audio books -- and the people selling them are aware of this. They advertise them knowing that some people can't stop listening. The reward circuit in the brain is getting so much from a really good book that you don't even have your eyes open. You can lose sleep over it, right? This is impacting the world.
So, is there anything we can do about how these technologies are rewarding us so much that we're stuck in an engagement loop? There was a book recently -- [Nir] Eyal wrote a book called Hooked. The book deliberately teaches people how to hook users -- how to keep them addicted, keep them using the system. What are the consequences when designers are deliberately doing that?
And they're all competing with one another to try to hook the users' attention. You've got this increasing competition for attention and now it's not attention by choice, it's attention by addiction. Where's that going to take us?
Steve Hoover: On this question of whether people are right to be concerned -- of course they are, right? These are profound changes. Internet addiction is a new experience we have to pay attention to. And, in the end, these AIs are tools, and people will use tools.
How many of us have tried to pound a nail in with a pair of pliers? I mean, I've done it when I don't have the hammer. And, you know, you better design a pair of pliers to be able to handle that. I think it is very valid to be concerned. I think the bigger question is, "What do we do with that concern?"
That's, to me, the question. For those who say the response is to stick your head in the sand, or to argue for escaping it, that's just not practical. Victoria highlighted the idea of designers accepting responsibility for that and understanding it. I think that's very valid, and I think that there are new classes of technology here, actually.
A technologist's answer to technology is always more technology. That's the risk. But what I mean by this is that we have examples of things like the "flash crash." We don't know what the systems are going to do. Designing systems to isolate the propagation of problems as things become more connected is really important. That is an area of technology that is not getting enough study, in my mind.
There are principles for designing systems so there can be an observer in the system watching what is occurring and then shutting down or changing the behavior. Victoria talked about the self-driving cars that allow the driver to lose attention. What they're doing is sensing if you're losing focus and turning off the system.
I think the new system with the Tesla is that if it has to remind you three times to put your hands back on the wheel, it will turn off autonomous driving for 30 minutes. That's a self-aware, resilient system design. As an area of technology, it's like when we created human-computer interaction: we created systems that were easy to use, and now we have to create resilient systems that can avoid massive problems as they gain more autonomy.
At the end of the interview, I asked whether there were any areas that I hadn't covered -- anything the group felt was important, but that I hadn't given them the chance to talk about.
Victoria Bellotti: Something I'm really interested in these days is creating systems that help people to help people. Technology is always thought of as the solution and not the enabler of a human solution. Too often we think of technology being the product, but it could be that the product is a relationship.
We've talked about experience rather than product. I've heard several people today say, "Focus on the experience, not the product." I'm very interested in research in that space. I'm interested in how technology can help people behave differently toward one another. Instead of being an autonomous, smart (which they're not) thing that does it for you, it's a support. It's an amplification. It's a sort of a tool, or a guide, or a nudger.
Dan Ariely gave the keynote and talked about behavioral economics, about how people can be given choice architectures. Machines can do this very well; they can make people aware of their blind spots: "You haven't asked this person how they feel today." There are email systems that do that, that say, "Hey, you used profane language, you might want to think before you send that email."
Machines are really good at those checks and balances, but they're not very good at the sort of intuitive things that require the wet hardware that we have, that is very good at reading people. People solve the problems and technologies should be built to help them do that more. That's a focus I don't see very much here.
Steve Hoover: I really like Victoria's point, and I would amplify it from another angle. I think there is a lot of concern about computers replacing people. There are technologists who believe we're at an end state -- that artificial intelligence will replace people.
I think that's (a) the wrong goal and (b) actually not likely to be true for a while longer. But I think we're entering the stage where computers can be my partner, in a different way. The idea [is] that Alexa is not really intelligent in the way a person is -- and if you assume that, you're going to get into trouble.
On the other hand, it's becoming much more of an equal. Software used to be a tool -- a tool that I used to get a job done. I think we're at a stage where computers are able to start becoming our partners.
Thinking about it that way -- instead of how it's going to replace me, to [its becoming] a partner to help me get my job done -- changes how you think about designing that computer system. [That's] because you recognize that the human still has a predominant role, and the computer is their aid and partner. That's a different viewpoint, which will bring more value in the long run and is more accurate about the stage that we're actually entering.
The conversation started with a rather dark vision and ended with a much brighter view. In the future, we'll bring you one-on-one conversations with Steve Hoover, Victoria Bellotti, and other experts in the field who can help us understand what the whole conversation on "digital transformation" is all about, and the critical roles that IT staff and professionals play in achieving it.