In an interview with Inc. magazine, former venture capitalist and Reddit CEO Ellen Pao said, “We're only starting to find out what can happen to our data on the big tech platforms, and how little control we've had over it…” She is right, and she is not alone in that sentiment.
Discovering what we value online, and how to protect it, has drawn a lot of attention lately from consumers and the media alike. The Fyre Festival fiasco, examined in documentaries on Netflix and Hulu, highlighted the role social media influencers and celebrity played in duping attendees of a poorly executed festival. The scam joins a growing list of public events that have stirred discussion about what is authentic online.
Business managers must watch for how social engineering - deception that manipulates people into divulging confidential or personal information for fraudulent purposes - can erode the credibility of their analytics.
Techniques for detecting fraudulent activity have existed since the early days of the Web, and even the medium they are applied to has not changed dramatically. What has changed is the purpose behind the software, which calls for a different set of fraud-prevention tactics that guard against social engineering.
For an analogy, consider warming up a car. That habit made sense when vehicles had carburetors - idling the engine let a richer gas-air mixture compensate for the cold. Modern cars use fuel injection, with sensors that automatically enrich the mixture for about 40 seconds and then tune it back. Yet some people still warm up the car, tempted further by remote autostart to heat the interior. Gaps like these - clinging to outdated information or techniques - are exactly what a social engineering attempt seeks to exploit.
Technologists are discovering that supporting software technology is often treated the same way as the warm-up ritual. Take cookies, for example: text files containing strings of data associated with browser activity. Cookies were originally used to store supporting data for websites people visit frequently, passing that data back to server-side code.
Misuse of the programming aimed at cookies altered their usage further. Inundated with unwanted ads, consumers learned to disable cookies in the browser, undermining the ability of cookies to feed accurate data into analytics metrics. Consumer behavior and intrusive ads had turned cookies into the browser's “carburetor.”
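As a concrete illustration of that original mechanism, server-side code might read cookie data back out of a request header like so (a minimal Python sketch; the cookie names are hypothetical):

```python
from http.cookies import SimpleCookie

def parse_cookie_header(header: str) -> dict:
    """Parse a raw Cookie request header into a name -> value dict,
    the kind of supporting data server-side code would consume."""
    cookie = SimpleCookie()
    cookie.load(header)
    return {name: morsel.value for name, morsel in cookie.items()}

# Hypothetical data a browser might send back on a repeat visit
header = "session_id=abc123; last_visit=2019-01-15"
visitor_data = parse_cookie_header(header)
```

When a browser disables cookies, that header simply never arrives, which is why the analytics picture downstream goes dark.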
Advanced measurement is also susceptible to fake metrics. Bot-related traffic can contaminate training datasets for machine learning whenever Internet traffic is part of the dataset, so a methodology for identifying bots in referral traffic is essential.
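One simple form such a methodology can take is screening traffic records by user-agent signatures before they feed a dataset. A minimal Python sketch, with an illustrative (and deliberately incomplete) signature list - production bot lists are far larger and updated constantly:

```python
import re

# Illustrative signatures only; real lists are maintained services
BOT_PATTERN = re.compile(r"bot|crawler|spider|scraper|curl", re.IGNORECASE)

def split_traffic(hits):
    """Partition traffic records into (human_like, bot_like)
    based on the user-agent string of each hit."""
    human, bots = [], []
    for hit in hits:
        target = bots if BOT_PATTERN.search(hit.get("user_agent", "")) else human
        target.append(hit)
    return human, bots

hits = [
    {"ip": "203.0.113.5", "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
    {"ip": "198.51.100.7", "user_agent": "Googlebot/2.1"},
]
human, bots = split_traffic(hits)
```

User-agent screening is only a first pass - sophisticated bots spoof browser strings - but it keeps the obvious crawlers out of a training set.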
So what can managers do to minimize the threat of fake metrics?
Fighting back starts with developing communication to understand what gets baked into assumptions. Far too often, executives and managers want just “the facts” from charts without questioning how the metrics behind those facts came together. Managers should focus on spotting outdated assumptions and the potential liabilities that come from mismanaged data.
Those discussions can also fuel demand for third parties to be more transparent about data sources and how data quality is maintained. Bots can invade traffic sources, but not all bots are bad: some are dependencies of applications interacting with other software, performing repeated functions or supplying support information. Bad bots, however - those crawling to steal website content and images - can also appear in traffic sources. Identifying such anomalies means asking these questions regularly at reporting time.
That communication will also involve implementing technical tactics. Marketers can learn to deploy a packet sniffer to inspect web requests and reveal the calls reaching the website's host server, allowing analysis of suspicious traffic activity (note that HTTPS payloads are encrypted, so inspection generally happens at the server itself or at a terminating proxy). In addition, a reverse proxy places a proxy server between Internet traffic and the server receiving the calls, adding a protective layer much like a firewall.
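Whether the request records come from a sniffer or from server logs, the analysis step can be sketched simply. A minimal, illustrative Python example that flags IPs issuing requests at an abnormal rate - the window and threshold values here are hypothetical, not recommendations:

```python
from collections import defaultdict

def flag_high_rate_ips(requests, window=60, threshold=50):
    """Flag IPs whose request count inside any sliding window of
    `window` seconds exceeds `threshold` - a rate no human browser
    is likely to sustain. `requests` is a list of (ip, timestamp)."""
    by_ip = defaultdict(list)
    for ip, ts in requests:
        by_ip[ip].append(ts)

    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # Slide the window start forward until it spans <= `window` seconds
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 > threshold:
                flagged.add(ip)
                break
    return flagged
```

Flagged IPs are candidates for exclusion from analytics reports, or for blocking at the reverse proxy.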
Incorporating facts about device functionality can also help verify whether measured digital activity comes from a real person. A device's accelerometer registers a small vibration whenever a person taps the screen. Combining signals such as that vibration, GPS movement, and the power supply could help classify whether an interaction is human or machine.
A less technical route, profile audits, can eliminate fake social media followers. Fake accounts typically display excessive clickbait in the profile description and post messages urging followers to “Click here for…” without any sign of deeper conversion. While these signals have traditionally indicated a range of bad actors circumventing community rules - from adult material to crude “get rich quick” schemers - they can also appear for less nefarious reasons. Whatever the cause, removing bad profiles is essential to removing social media-driven anomalies. Clean activity matters for analytics reporting, and even more so for machine learning models that use social media as a training dataset parameter.
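The audit signals above can be sketched as a simple scoring check. A minimal Python illustration - the phrase list and the majority threshold are assumptions for the sake of the example, not a vetted rule set:

```python
# Illustrative clickbait phrases; a real audit list would be larger
CLICKBAIT_PHRASES = ("click here", "get rich", "free followers", "dm me")

def audit_profile(description: str, posts: list) -> bool:
    """Flag a profile that shows the fake-account signals described
    above: clickbait in the bio plus mostly 'Click here for...' posts."""
    if not posts:
        return False
    bio_hits = sum(p in description.lower() for p in CLICKBAIT_PHRASES)
    post_hits = sum(
        any(p in post.lower() for p in CLICKBAIT_PHRASES)
        for post in posts
    )
    # Require the signal in the bio AND in a majority of posts
    return bio_hits >= 1 and post_hits / len(posts) > 0.5
```

A flagged profile is a candidate for manual review rather than automatic removal, since the same signals can come from the less nefarious accounts noted above.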
Metrics will never be 100% perfect. Identifying genuine human activity across your website, app, chatbots, and data will remain a systematic struggle for years to come. But examining your protection against social engineering tactics can establish the right starting points for safeguarding your digital marketing investment.