Who Is Colin Chapman? And What Can We Learn from Him?
Anthony Colin Chapman was a design engineer, inventor, and founder of Lotus Cars, a British manufacturer of sports and racing cars known for their exceptional handling and light weight. In a nutshell, Chapman's design philosophy was "simplify, then add lightness." In practical terms, this approach ensured his cars were fast -- not only on the straights, but particularly in the corners. In fact, between 1962 and 1978, Team Lotus won seven Formula One Constructors' titles, six Drivers' Championships, and the Indianapolis 500. Based on results, it seems Colin may have been onto something.
So what exactly does Colin Chapman's philosophy on lightness and speed have to do with cloud security, big data, and IT architecture? To find out, let's focus on your current state data architecture strategy.
Current State Data Architecture and Security
Let's pick a data category that is (according to a white paper from the Ponemon Institute) a consistent source of security concern to enterprises worldwide: customer data.
If I were to audit your enterprise data architecture, how many instances of customers' data records would I find? One? Three? Six? Maybe more? Would you be surprised if I told you that as an IT auditor (before I worked for Intel), I would routinely find at least six distinct data records on the same customer in a typical U.S.-based company?
If you continued auditing these islands of customer data and compared them at the record level, what do you think you might find? Would the information be consistent in content, use, and ownership? As it turns out, that's not very likely.
As an auditor, I found approximately 75% parity among these records. What about the status of the other 25 or so percent of the information? Would it reflect the nuances of whatever department owned and maintained the record? What would the value of these nuances be to a competitor, or to a bad guy who hacked the record? What would you suspect Colin Chapman might say about the impact on enterprise performance of the (debatably) needless weight of all this duplicate data?
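To make the parity idea concrete, here is a minimal sketch of the kind of field-level comparison an auditor might run across duplicate customer records. The records, field names, and departments below are hypothetical illustrations, not a real audit tool or real data.

```python
def field_parity(records):
    """Return the fraction of fields whose values agree across all record copies."""
    fields = set().union(*(r.keys() for r in records))
    matching = sum(
        1 for f in fields
        if len({r.get(f) for r in records}) == 1  # one distinct value => all agree
    )
    return matching / len(fields)

# Three copies of "the same" customer, as maintained by different departments.
crm     = {"name": "Jane Doe", "email": "jane@example.com", "phone": "555-0100", "tier": "gold"}
billing = {"name": "Jane Doe", "email": "jane@example.com", "phone": "555-0199", "tier": "gold"}
support = {"name": "Jane Doe", "email": "jane@example.com", "phone": "555-0100", "tier": "silver"}

print(f"parity: {field_parity([crm, billing, support]):.0%}")  # → parity: 50%
```

The disagreeing fields (here, phone and tier) are exactly the departmental "nuances" described above -- and each divergent copy is another version of the truth a competitor or attacker could exploit.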
With all this in mind, let's move the discussion to big data and the cloud.
First, it's important to frame big data as simply as possible (understanding that it means many things to many people). Basically, big data is about harnessing the power of analytics to mine useful business intelligence (BI) out of massive amounts of ostensibly non-related information. The amount of information is so massive that a typical enterprise data center doesn't have the capacity to conduct this analysis inside its firewall.
If this definition is fairly accurate, then information about your customers is likely of value to your BI effort.
Now the question becomes: exactly which of the theoretical six customer data records would you use in the cloud as part of your big data strategy? Would you default to using the one record that reflects the greatest percentage of commonality among the six? Or would you simply continue to use all the records in the cloud?
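One way to sketch "greatest commonality" selection is to build a field-by-field majority view across the duplicate copies, then keep the single record that agrees with that majority most often. This is only an illustrative toy under assumed data, not a prescription for a master data management process.

```python
from collections import Counter

def most_common_record(records):
    """Pick the record that best matches the field-by-field majority across all copies."""
    fields = set().union(*(r.keys() for r in records))
    # Majority (most frequent) value per field across all copies.
    consensus = {f: Counter(r.get(f) for r in records).most_common(1)[0][0]
                 for f in fields}
    # Score each record by how many of its fields match the consensus.
    return max(records, key=lambda r: sum(r.get(f) == consensus[f] for f in fields))

records = [
    {"name": "Jane Doe", "email": "jane@example.com",  "phone": "555-0100"},
    {"name": "Jane Doe", "email": "j.doe@example.com", "phone": "555-0100"},
    {"name": "J. Doe",   "email": "jane@example.com",  "phone": "555-0100"},
]
best = most_common_record(records)
print(best["email"])  # → jane@example.com
```

Even this simple heuristic forces the uncomfortable governance question: which department's "nuances" get discarded when you pick one record?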
Due to history, effort, potential organizational ownership confrontations, and related difficulties, most companies would take the easiest route and place all six customer records in the cloud. (If your company has done something else, please let me know. I'd love to be proven wrong.)
Structural Security Implications
In Sun-Tzu's The Art of War¹, the author speaks of five types of incendiary attacks. The first is to incinerate men, the second is to incinerate provisions, the third is to incinerate supply trains, the fourth is to incinerate armories, and the fifth is to incinerate formations. Let's explore this premise using customer records.
Consider that each customer record represents a standalone formation of data provisions, all being exchanged via very long, and very exposed, supply lines. Where one record has its related security concerns, six records containing fundamentally the same data (at least 75% of it) have more.
I suggested in an earlier blog that the bad guys seem to be much better at adapting to (and taking advantage of) structural security weaknesses than we are at defending them. So not only does the extra weight of these records impact performance (a la Colin Chapman); it also gives the bad guys a target-rich environment that's easier to breach. Since continuing to move toward the cloud and big data is inevitable, what actions do we need to start -- or should we already have in place -- to prepare for what I call future state security?
In my next blog post, I'll begin to define what a future state security framework (and its funding) should look like and offer suggestions on how roles and responsibilities must evolve as the boundaries of your organization expand.
As always, I'm interested in your feedback to learn how your organization is selecting data to include in your big data strategy. I'd also like to know if you're planning on managing your data architecture differently than you did when it only existed inside your firewall. To join the conversation, please contact me through Twitter.
Bob Deutsche provides business and technical advisory services as well as thought leadership to mid- and senior-level executives in the Global 50 and public sector. With 3- years of experience in industry, Bob's background includes centralized and LOB IT organizations, data center operations, software development, and CIO positions. Bob is a retired Lt. Colonel in the U.S. Air Force and holds a Master of Science in Systems Management from the University of Southern California, Viterbi School of Engineering.
The above insights were provided to InformationWeek by Intel Corporation as part of a sponsored content program. The information and opinions expressed in this content are those of Intel Corporation and its partners and not InformationWeek or its parent, UBM TechWeb.
¹ Sun-Tzu, The Art of War, Translated by Ralph D. Sawyer, Fall River Press, 1994, p. 227.