Living in the internet age, it is incredibly easy to take for granted how thoroughly this digital world has touched every aspect of daily life. Not that this is surprising; after all, what choice do people have? Not to engage with the internet means being left in the dust of a bygone era. Yet there is something mythical about its nature: it is tended by a relatively scant crowd of technological oracles, yet its impact is of a scale that would make even the Olympian pantheon jealous.
The internet has become a de facto facet of life, particularly for millennials and those younger who have grown up with it. For us, the thought of living without being online is not merely unsettling; it is downright inconceivable. For those who trekked through the world of letter writing and catalog shopping, the thought of returning may be quaint, but it is not likely ideal. So this leaves us with an infrastructure that, for the most part at least, all people depend upon… and this is where the situation starts to get messy.
Last week, we wrote a post about our research into some major personal data breaches. These examples (in addition to many others) have been a wake-up call to the masses who have, for the most part, been unknowingly uploaded into the untamed jungle of online data. Here, the law is almost entirely in the hands of the disparate tribes that make it up. And according to a HuffPost survey on users' trust in Facebook, it is clear that people are losing faith in these tech tribes to properly regulate themselves and be responsible with data.
Therefore it is not surprising that in the new Congress, representatives in both chambers are moving on this issue. In the past month, there have been House and Senate hearings on consumer data privacy. In the House, the hearing involved five witnesses testifying before the Subcommittee on Consumer Protection and Commerce, whose parent committee has heard high-profile testimony from the likes of Equifax, Facebook, Twitter, and Google in just the last year.
The panelists included Ms. Collins-Dexter, an anti-data-discrimination advocate from Color of Change; Professor Layton, who studies international policy (and specifically GDPR) in Europe; two industry representatives, Ms. Zheng for the Business Roundtable and Mr. Grimaldi for digital advertisers; and finally Ms. O'Connor from the Center for Democracy & Technology. Their testimonies and the hearing recording can be found on the subcommittee's site.
So, why do we care about all this hubbub at MetaRouter? One of our chief concerns is around providing a platform that has air-tight security, and this entails staying up to date with compliance standards as well as contributing to the dialogue around the civilization of the data jungle. After all, providing companies with the ability to control and protect data was the key inspiration behind our Enterprise Edition.
So here are five truths about data privacy that we believe will be pillars of the narrative to come.
Some of these are fairly America-specific, however, they all have a pervasive impact as the standard set by one of the world’s greatest powerhouses—home base for Google, Facebook, and other data giants—could define the future of how we approach data, the internet, and what it means to have privacy in an intimately connected world.
Number One: There is a need for action on discriminatory data practices
As a disclaimer, this takeaway will involve some opinionated vision on my part and does not necessarily reflect the voice of MetaRouter. Data security is nothing short of a complex topic with many perspectives, my own included. If the reader would indulge the potential for ideological difference, this point can set up what is ultimately meant to be a framework for discussing what we believe to be at the core of the online future.
According to Ms. Collins-Dexter, there have been some serious breaches of ethics around data collection and application. She comes from an organization called Color of Change, which advocates for racial justice on many accounts. We are going to do our best to avoid the politics of the situation, but this does raise an important counter-mindset to what most of the other panelists bring, which tends to focus on either the commercial effect or a more generalized sense of responsibility around data practice.
In fact, a substantial body of opinion holds that these hearings do not go far enough to include perspectives outside the standard industry talking heads. No doubt this is a complex issue with many valid and often competing arguments; even so, a sustainable solution is going to be one that addresses the equitable treatment of people's data.
Ms. Collins-Dexter supplies some shocking examples of how personally identifiable information can be used to target, for instance, Hispanic minority groups who may be more susceptible to high-interest loans. As if this practice were not egregious enough on its own, racial epithets are often tied to it; the example given is using the tag "easy dinero" to label these supposedly easier targets. Fortunately, practices like this were unanimously condemned by the panel, albeit more strongly by certain members.
Where the waters get murkier, and perhaps more sinister, is a topic that seemed to be swept aside for most of the hearing. Ms. Collins-Dexter brings up a practice of using non-sensitive telemetry data to inflate insurance rates and target specific property at low-income communities, particularly those with large minority populations. Let's break down what that means. Telemetry data is essentially non-identifying information that a smart car might send: driving speed, time of day of travel, distance of travel, etc. This data can then be used to infer a profile of a person, which in turn enables targeting. According to Ms. Collins-Dexter, this practice disproportionately affects urban, African American communities, and she is not alone; there is even an FTC report covering the severity of this issue:
“At its worst, big data can reinforce—and perhaps even amplify—existing disparities, partly because predictive technologies tend to recycle existing patterns instead of creating new openings… For instance, some data suggests that those who live close to their workplaces are likely to maintain their employment for longer. If companies decided to take that into account when hiring, it could be accidentally discriminatory because of the racialized makeup of some neighborhoods.”
As the conversation around the future of the data-scape progresses, it is clear that all of us, industry, advocates, politicians, journalists, and so on, need to treat this issue with the nuance and care it deserves. The sustainable solution is going to be one that is smart for commerce, but it also must be a system that works for everyone and achieves the intended goal of protecting people's privacy. Luckily, with precedents like GDPR and CCPA to work from, as well as a slew of intelligent voices, we are not going in blind.
Number Two: America has an opportunity to learn from GDPR
GDPR (the EU General Data Protection Regulation) took the international tech world by storm. Most people likely at least remember receiving a hefty stack of emails from companies rewriting their EULAs and other legal documents. If you are like me, you probably saw the laundry list and clicked the handy "Mark all as read" button. Ironically, this gets to the core of one major flaw with GDPR.
According to Prof. Layton and Ms. O'Connor, one issue with a law like GDPR is that it is simply compliance bloat: while it contains excellent ideological steps in the right direction, in practice it does not accomplish much by way of increasing confidence. After all, does anybody really read those agreements? Perhaps some, but even then it would require at least a working knowledge of legal jargon. As Ms. O'Connor points out, there is no freedom or control, or at least not the perception of it, in a yes-or-no signature. More on this later.
Frankly, GDPR will continue to be at the core of the discussion, and it should be. As the adage goes: "those who do not learn from history are doomed to repeat it." Granted, it is a short history for data privacy legislation, but the representatives kept this in mind throughout the hearing, consistently falling back on Prof. Layton to vet other panelists' claims against the GDPR precedent. Layton herself points out, though, that even the framers of GDPR admit it may be years before a truly substantive judgment can be made.
Constructing this legislation is going to be much like building a plane while piloting it. Every decision has the potential for large scale impact as it affects an infrastructure that rooted itself in society before anyone thought to regulate it. While lawmakers in the hearing did show deference to the wisdom that can be gleaned from GDPR, it is also clear they are cognizant of its nascent nature.
Number Three: The data industry will be treated as interstate commerce requiring one national standard
This point is perhaps more straightforward and has more consensus than any other portion of the debate. Even the conservatives on the committee, often the self-described champions of states’ rights, are completely on board with the need for a federal preemptive law. This means that there will almost certainly be bipartisan support for a national standard that will override any state law.
The constitutional argument for this lies in the commerce clause, which dictates that Congress holds influence over interstate commerce. As multiple panelists point out, it is understandable that a state like California (home to many of the tech giants at the center of the discussion) would move to create a law when there is no federal action; however, the chaos that would ensue if other states followed suit would be astronomical. As it is, companies now need to be concerned with complying with CCPA, the California law, and with GDPR if they wish to operate in Europe. If suddenly every state has patchwork laws defining its own standards, this creates an absolute nightmare for businesses of any size trying to exist online.
Almost without exception, the representatives at the hearing were concerned with this point. Whether from a civil rights, economic, or compliance standpoint, there was common recognition of the need for standardization. There is perhaps some rhetorical reason to suspect Democrats would not be all-in on preemption, since the states with strong privacy leadership are left-leaning. But from the hearing, it seems clear any such notion will likely be a blip in the discussion, and the industry should prepare for preemptive, comprehensive legislation accompanied by an upgrade to the Federal Trade Commission's oversight and enforcement power.
Number Four: There will be heavy scrutiny around accountability and transparency
Here we arrive at the meat and potatoes of the discussion, and likely the factor that will most affect the daily happenings of businesses. Throughout the hearing, there was much talk about how those in the data world are going to be held accountable for the data they collect and use. Ms. O'Connor frequently returned to the point that notice and consent are not choice or control. The hearing did not get too far into the weeds on how this might be implemented, but it mostly boils down to consumers having a right to access, alter, and delete information about them.
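As a thought experiment, the access/alter/delete rights the panel describes can be sketched as three operations against whatever store holds a user's profile. Everything here is hypothetical and illustrative (the function names, the dictionary standing in for a datastore); it is a sketch of the idea, not any real product's API.

```python
# Hypothetical sketch of the "access, alter, delete" rights discussed above.
# The in-memory dict stands in for a real datastore; all names are illustrative.

records = {}  # user_id -> stored profile data

def access(user_id):
    """Return a copy of everything held about a user (the right to access)."""
    return dict(records.get(user_id, {}))

def alter(user_id, field, value):
    """Let a user correct a stored field (the right to rectification)."""
    records.setdefault(user_id, {})[field] = value

def delete(user_id):
    """Erase a user entirely (the right to be forgotten)."""
    records.pop(user_id, None)

# Example: a user inspects, corrects, then removes their profile.
alter("u1", "email", "old@example.com")
alter("u1", "email", "new@example.com")
print(access("u1"))   # {'email': 'new@example.com'}
delete("u1")
print(access("u1"))   # {}
```

The hard part, of course, is not these three functions but wiring them through every system that has ever copied the data, which is exactly where the scope questions below come in.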
It is unclear what the scope of this would be. The panel generally agreed that any personally identifiable information (PII) must be protected in this way. However, there is less clarity about extending this to all data online. Ms. Collins-Dexter, for example, advocates much more oversight of non-sensitive data as well, as with the telemetry data from automobiles. Other panelists, though, ranged from ambivalent to negative on this front. The industry representatives were mostly fielding concerns about over-constraining the ecosystem and the infrastructural effects that could have; any such control would be a lower priority for them.
Regardless of how this pans out, it looks like companies collecting or using data will need methods that allow consumers this agency over their online selves. Going a step further, Ms. O'Connor also suggests limitations on sharing data with third parties. In an effort to improve their own stakes in privacy, however, browsers already seem to be moving in this direction. With Safari leading the way, it is not crazy to believe that giants like Chrome and Firefox might soon follow suit, especially given that Google carries the lion's share of online tracking anyway. In fact, from a market-share perspective, more limitations and compliance look great to organizations like Google… but let's not jump there quite yet.
The question still remains: how do we give consumers control and choice in a way that respects individual privacy rights but also allows companies to use data in non-invasive ways? After all, this is why we have a free and convenient internet. At MetaRouter, we believe this largely is rooted in moving towards server-side solutions that put control back in the hands of the organizations who have the most to lose from the consequences of bad data practice (besides the actual consumer of course). But even so, just having the control is not enough on its own.
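To make the server-side idea concrete, here is a minimal, hypothetical sketch of an event router that forwards full events only to destinations a user has consented to, and strips identifying fields otherwise. The field names, consent model, and function are our own illustrative assumptions, not a description of MetaRouter's actual implementation.

```python
# Hypothetical server-side event router: the server, not the browser, decides
# what each downstream party receives. All names here are illustrative.

PII_FIELDS = {"email", "name", "ip_address"}  # assumed identifying fields

def route_event(event, consented_destinations, destination):
    """Return the payload a given downstream destination should receive."""
    if destination in consented_destinations:
        return dict(event)  # user opted in for this party: full event
    # No consent: forward only the non-identifying fields.
    return {k: v for k, v in event.items() if k not in PII_FIELDS}

event = {"action": "page_view", "email": "a@b.com", "ip_address": "1.2.3.4"}
print(route_event(event, {"analytics"}, "analytics"))
# {'action': 'page_view', 'email': 'a@b.com', 'ip_address': '1.2.3.4'}
print(route_event(event, {"analytics"}, "ad_network"))
# {'action': 'page_view'}
```

Because the filtering happens on infrastructure the company controls, the company can audit and prove what left its systems, which is the control client-side tags cannot offer.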
Professor Layton brings up in the hearing that GDPR has not increased consumers' confidence, even though the regulation requires that people have access to their data. This, the panel argues, is where it breaks down in the implementation. More agreements, more signatures, more consent: these really only cause more confusion and distrust. It also creates legal requirements that prevent companies with fewer resources from keeping up. There will have to be a solution that keeps new regulation from imposing cumbersome minutiae on companies and consumers, while enabling a system that is scalable and effective for all.
Number Five: In order to protect competition and innovation, compliance should be elegant and simple
Piggybacking off the discussion of scalability and efficacy, the last key takeaway is a consensus on the need to protect small businesses and startups from a laundry list of regulation. Nearly all panelists argued that too much regulation ultimately discourages smaller organizations from entering the data space, as there is too much risk associated with compliance issues and legal infrastructure.
Representative Lujan went as far as to suggest that perhaps companies should be required to have a Chief Protection Officer. This was not entirely dismissed by the panel, however, there were concerns over the strain this could have on small businesses.
It is easy enough for the Googles and Facebooks of the industry to comply with every minute detail of a legislative package, and to mount a strong legal defense when they don't. For a small team with surface-level compliance knowledge at best, however, this can be a daunting task. Indeed, according to Prof. Layton, GDPR actually led to Google increasing its market share of ad tech after it passed.
Another side of this coin is services such as social media and news publications that are largely kept open and available by data collection and the advertising that relies on it. According to Prof. Layton, under GDPR many American publications had to stop all traffic to Europe because the compliance burden and cost were too high. This means the millions of Americans there now have no access to these resources, and Europe becomes more separate from the outside world.
Ultimately, nobody on the panel asserts that creating regulation around data practice and privacy is altogether flawed. Instead, there is a sense that the final approach will need to be simpler and more philosophical in nature: attentive to detail and nuance around the specific practices that need to be cracked down on, with allowances for people to have control and choice over their online data, but without the needless bloat that will only lead to more legal overhead and, in turn, more consumer distrust.
The last takeaway ended with this notion of a philosophical overhaul, and this seems like the ultimate direction that the panel would suggest and perhaps the committee will take up. This could look like a new bill of rights for individuals that extends many rights we theoretically enjoy in the physical world to the online world. This does not mean a lack of substance; after all, legislation is merely the armor designed to protect those rights that form the foundation.
Ultimately, the panel may have been too critical of GDPR. Prof. Layton herself described the time needed to truly understand its impact, and it has many of the items that the panel has called for: the right to access, right to be forgotten, privacy by design, etc. Certainly, such legislation should not have the effect of limiting innovation, but if we truly wish to treat privacy as a right—which data show people do—then there have to be allowances made to that end.
What does the future of this discussion look like in America? It seems from the hearing that legislators would like to move on this as quickly as possible, and there is strong bipartisan support around the issue. Additionally, with CCPA going into effect next year, there is pressure to roll out change before state measures drive chaos.
One other potential facet of the discussion revolves around hardware vulnerability. This may be a harder task to take on as far as near-term legislation is concerned. However, in light of a Bloomberg article released last year about a large-scale infiltration of hardware, this will likely be the next step of privacy assurance after the foundational rights and protections have been established.
One other factor that will be interesting to watch in the long run, but may not have much effect in the short term, is how digital literacy plays into all of this. As mentioned at the beginning of this piece, so few people really understand the workings of the internet even at the most basic levels. But as future generations are brought up to understand the digital world as foundational to our lives, much like literature and social studies, there is a huge opportunity for greater online citizenship. But for the near future at least, the dialogue will be dominated by how we can at least establish the basis for a just digital world.
Why does MetaRouter care?
Our ears perked up at first notice of this hearing because, since our inception, data security has been a core tenet of our philosophy and development. This has not only prepared us for the likes of GDPR but also uniquely positioned us around questions of control and access.
We are excited to see that society is moving towards enshrining digital agency as a new arm of basic rights, and we are keenly aware that this means infrastructure is going to need to be ready to deal with that. As the likes of 3rd party tracking and pure client-side implementation become headaches, solutions such as our Enterprise Edition and server-side implementations are going to be essential for allowing these newfound principles to be upheld in a scalable way.
This conversation will not end with legislation, and neither will our voice. Every point hit on here is its own set of discussions and outlooks, and we intend to dive further into them through the lens of our experience and our ideology around the crucial importance of allowing anyone who needs to use data to do so in a flexible and secure way. It is our mission to do our part to help the future of the data-scape look even more sustainable and accessible. We hope you will stay engaged with us as we continue to unpack these topics and the ongoing problem-solving of engineering a truly data-smart ecosystem.