
Does Privacy Need Disrupting?

Executive Summary

When it comes to the use of data in a business context, there are a few absolute truths: (1) businesses will continue to gather and process more and more information about people to meet their goals; (2) we will continue to see larger and more far-reaching data events involving personal information; and (3) regulators will continue to respond with increasingly complex requirements around the handling of personal information.

This paper reflects on the trajectory data use is taking within the business environment and explores some of the challenges the privacy profession faces in trying to keep pace.  The combination points to the inevitability of catastrophic data incidents.

But as in so many other industries, modern technology may hold the answer to managing the risk.  This paper goes on to argue that, through the measured deployment of disruptive technologies, the privacy profession may find a way to support the acceleration of data use in the business while managing risk and pursuing compliance.


The thing about black swans is that they are both predictable and unpredictable – you know they are going to happen; you just can’t anticipate when they will occur or what form they will take.  In the period of one week in December, over 600 million records containing PII (Personally Identifiable Information) were breached. For perspective, that’s more than every man, woman and child living in the US, UK, Canada, Australia and Russia combined.

With the increasing volume of PII being collected and processed by organizations around the world, it was inevitable that something like this was going to happen.  Moreover, it will happen again – and bigger – from triggers and vulnerabilities on which the risk community is not focusing. And no global organization wants to be named in a headline that talks about hundreds of millions of records being compromised.  

About data

We live in an age where information is emerging as a truly leverageable resource for companies around the world, enabled by the incredible pace of change in technology and analytics capabilities.  The opportunities to improve customer experience are growing exponentially. To be sure, customers now measure their own satisfaction – and loyalty – based on capabilities offered by service providers that were not even possible a few short years ago.  And companies are doubling down on investment to outpace their competition or, in many cases – in the face of disruptive startups – to ensure their very survival.

Much of the data at the heart of the most promising innovations is in some way tied to individuals — whether traditional PII or PHI or new data around people’s movements, tastes and behaviors, spun off from IoT sensors, new analytics technologies or apps used by individual consumers where they are knowingly or inadvertently contributing data.

We also see that, as some companies push the boundaries, or in the aftermath of high-profile data incidents, lawmakers are reacting by implementing far-reaching legislation to protect the rights of individuals.  Complying is both a challenge and an imperative for all organizations, but especially for forward-looking global organizations, as they navigate uncharted waters and as regulations emanating from different jurisdictions overlap and conflict.

Given the pace and trajectory of developments in technology and data, and the scale and frequency of data events, it is reasonable to predict breaches that are both larger in scale and more impactful.   Moreover, the growing number and complexity of regulatory requirements – many triggered in the aftermath of data breaches – will place an increasing burden on businesses, heightening internal tension between those developing new and innovative products and services and those tasked with managing risk and ensuring compliance.  Finally, the potential ramifications of a breach, including very significant fines, lost business or damage to the brand, can have lasting negative consequences for any organization.

Risk and privacy activity today

Today, risk management and privacy work is heavily manual.  Risk management and privacy groups are relatively compartmentalized and often viewed as necessary but imposing layers of bureaucracy, addressed late in the process and only after the business requirements are met; their requirements are often viewed as disruptive and costly.

Whereas “Privacy by Design” seems like an obvious enabler, and has been a holy grail of sorts, passionately embraced by privacy practitioners, it’s often downplayed (or ignored) by business development groups.

The basic processes around risk and privacy include the following:

  1. Privacy policies that reflect requirements — whether legal, contractual, ethical, professional or industry parameters.  These establish the inward- and outward-facing posture and serve as the foundation that drives every meaningful aspect of the program.
  2. Process documentation: business processes that handle PII are documented and analyzed to identify risk and to ensure that controls mitigate the risk and align with policy requirements.
  3. Data and application inventories: as a supplement to process documentation, knowing what data is on hand and what applications process it helps ensure that appropriate controls are in place.
  4. Trigger points within processes – IT or business processes – around changes or data events requiring action; certain activities, such as developing or changing an application that stores or processes PII, should trigger a Privacy Impact Assessment to determine what risks exist and what controls are needed.
  5. Consultations and approvals, where SMEs respond to inquiries and use research and professional judgment to provide recommendations.
  6. Risk assessments that take place periodically to determine what has changed and whether controls are aligned with risks to PII.
  7. Controls tested periodically to ensure they are functioning as intended.
  8. Control weaknesses or failures documented in findings reports requiring action by control owners.

The process is largely manual

The key point of this list is to highlight that all of these activities are manually intensive and are at best supplemented or enabled by tools such as GRC applications.  And while the enabling tools and applications help, these processes are only linearly scalable – meaning increases in the number of in-scope processes and applications require a proportional increase in resources (people) to accomplish the risk and compliance activity.   Moreover, while the most effective privacy programs distribute the activity across business constituents, and can gain some leverage and economies of scale, the costs still increase roughly linearly.

Most organizations face challenges in trying to increase their bench of privacy SMEs, since these roles require in-depth understanding of the organization as well as privacy expertise, and must exercise consistent judgment.  Maintaining consistent quality in the advice provided by SMEs is thus a risk and a challenge in itself.

So in summary, the technology, data, business and regulatory environment is evolving rapidly, getting more complex, and more critical for the continuing success of the organization.  Traditional privacy risk and compliance practices are heavily manual, reactive, burdensome and difficult to scale. In combination, it’s clear that costly and damaging issues will continue to arise, and the tension between the execution of business strategy, managing risk and maintaining compliance will become even more pronounced.

What is changing…

In order to become better embedded and get ahead of business developments that leverage data, the privacy function needs to understand how the business plans to gather, manipulate and store PII, and overlay the risk and compliance requirements for its treatment and handling – which should result in certain adjustments to the business strategy.  

The privacy team has to understand all aspects of information risk management (leveraging an auditor’s playbook) to judge sufficiency of control, and be able to interface with the business, IT, IT security, legal, audit and compliance stakeholders, as well as with regulators.  

An important dimension of this is to have a framework for accepting residual risk.  This framework has to resist the “group-think” temptation to be either blinded by competitive pressure or the promise of fantastic profits, or lured into the “risk elimination” mode.  Instead, it should allow for the analysis of risk, mitigating effect of controls, and a transparent mechanism to accept residual risk that escalates upwards through leadership, depending on the overall risk/benefit balance.
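A framework like this can be sketched numerically. The toy model below is illustrative only: the scoring scale, the control-effectiveness factor and the escalation tiers are assumptions made for the example, not a prescribed standard.

```python
def residual_risk(likelihood, impact, control_effectiveness):
    """Inherent risk reduced by the mitigating effect of controls.

    likelihood, impact: 1 (low) .. 5 (high)
    control_effectiveness: 0.0 (no mitigation) .. 1.0 (fully mitigating)
    """
    inherent = likelihood * impact  # inherent risk score, 1 .. 25
    return inherent * (1.0 - control_effectiveness)

def acceptance_tier(score):
    """Escalate acceptance upward through leadership as residual risk grows.

    Thresholds are hypothetical; each program would set its own.
    """
    if score < 4:
        return "process owner"
    if score < 10:
        return "privacy officer"
    if score < 16:
        return "risk committee"
    return "executive leadership"

# Example: a likely (4), moderately impactful (3) risk with strong controls
score = residual_risk(likelihood=4, impact=3, control_effectiveness=0.7)
print(round(score, 1), acceptance_tier(score))  # prints: 3.6 process owner
```

The point of the transparent formula is that acceptance of residual risk is neither reflexively blocked nor quietly waved through: the number determines who must sign off.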

But as discussed above, data “events” are bound to happen — whether breaches, losses or abuses — and privacy professionals too often are reactive.

Fundamental and disruptive change – leveraging Artificial Intelligence

Business, technology and data science will continue to accelerate, events will happen and regulations will come into effect.  The result is an increasing tension between opposing forces, in which the resistant compliance side of the equation will almost always lose.

It’s time to take a fresh look at the model.  Increasingly, companies are recognizing the disruptive effect that data and analytics (including AI) will have on their business – the very action that increases the risk of privacy events discussed in this paper.

Privacy compliance can benefit from disruption.  

Ultimately, many aspects of privacy compliance will benefit from the disruptive use of AI and cognitive algorithms.   Given that privacy compliance combines documentation, analysis and judgment, there are opportunities to design and train algorithms to assist analysis, which will increase the timeliness and reach of the program.


First and foremost is the recognition that intelligent automation and leveraging AI is a journey – not a destination – and benefit is gained incrementally.  Focus begins with the more basic and mechanical aspects of the program, freeing analyst time for more sophisticated and complicated issues.

The privacy activities are then broken into categories, which helps to drive priorities:

  • Routine daily tasks that need to be monitored for compliance, where certain events trigger action
  • Changes involving new applications, data, business ventures or data use cases
  • New requirements, such as new regulations, risk factors or data use restrictions

As the process matures, more aspects of the program can be automated, leading to a state where increasingly sophisticated tasks are processed automatically and the SME is engaged at certain thresholds where, say, more judgment or specific approval is needed.  If properly implemented, the algorithms are trained methodically (“crawl, walk, run”) and logged to ensure consistency.
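The threshold-and-escalation pattern can be sketched as a simple triage function. The task fields, the category list and the risk threshold below are assumptions made for illustration; the logging is there because, as noted, decisions should be recorded so their consistency can be reviewed.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("privacy-triage")

# Hypothetical threshold, tightened or relaxed as the program matures
AUTO_APPROVE_MAX_RISK = 0.3
# Hypothetical categories that always require SME judgment
REQUIRES_SME = {"new_data_use", "cross_border_transfer"}

def triage(task):
    """Route a task to automated handling or an SME, logging the decision
    so outcomes can be audited for consistency."""
    if task["category"] in REQUIRES_SME or task["risk_score"] > AUTO_APPROVE_MAX_RISK:
        decision = "escalate-to-SME"
    else:
        decision = "auto-handle"
    log.info("task=%s category=%s risk=%.2f -> %s",
             task["id"], task["category"], task["risk_score"], decision)
    return decision

print(triage({"id": "T-1", "category": "routine_access", "risk_score": 0.1}))
print(triage({"id": "T-2", "category": "new_data_use", "risk_score": 0.1}))
```

Here the first task is handled automatically and the second is escalated; maturing the program means widening the set of categories and risk levels the automated path is trusted with.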

Example activities that are candidates for automation:

  1. Process review comparing to policy – using an algorithm to determine whether a proposed process might violate a privacy policy
  2. Access monitoring – data stores containing information pertaining to people can be monitored for access and AI can analyze access for anomalies, and trigger responses
  3. Data access requests – routine operational transactions, such as requesting access to certain data, can be vetted and handled through Intelligent Automation
  4. Transaction monitoring – AI sensors can be tuned to monitor a wide range of structured and unstructured transactions guarding against inadvertent use of private information
  5. Privacy event analysis/DLP – Data Loss Prevention (DLP) sensors can capture thousands of potential events on a daily basis.  AI can be used to risk-assess the events based on a variety of rules, and flag those exceeding a predetermined risk threshold for further investigation.  
  6. Control analysis and testing – privacy programs often include a periodic testing cycle.  AI can be used to evaluate the results of testing to assess severity
  7. Data discovery and inventory – All organizations have large volumes of unstructured data stored (and often forgotten) on network file servers.  AI can be used to traverse the file stores and build metadata tables around the data, and can be tuned to identify sensitive data, helping to ensure compliance.
  8. Data pseudonymization – AI can be used to implement pseudonymization techniques on a large scale, and can test whether the data can be re-identified.
  9. Contract review – Oftentimes, additional specific data-handling terms are embedded in contracts with large clients.  AI can be used to extract those terms and correlate them to specific data in the environment to help comply with the client’s requirements.
  10. Regulation review – AI can be used to highlight applicable sections of regulation based on ingested company policy documentation, which accelerates implementing compliance activity
  11. Risk analysis – Algorithms can be trained to detect data use-cases that are in conflict with policy.
  12. Residual risk assessment – Quantifying residual risk is very important for determining whether risks are sufficiently mitigated to meet corporate risk appetite, and whether a value proposition is still valid.  AI can help with the determination.
  13. Customer inquiries – Intelligent automation can be used to handle customer inquiries around where data is, requests for erasure or transfer.  This can be extremely burdensome for companies with large numbers of individual customers.
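To make one of these concrete, item 8 (pseudonymization) can be sketched with keyed hashing: a direct identifier is replaced by a stable token that still allows records to be joined but cannot be reversed without the key. The field names and key handling below are illustrative assumptions only; a production implementation would keep the key in a vault, separate from the data, and would still need to assess re-identification risk.

```python
import hmac
import hashlib

# Assumption for the sketch: the key is held separately from the data set
SECRET_KEY = b"example-key-held-separately"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable keyed token.

    The same input always yields the same token (so records still join
    across systems), but the token cannot be reversed without the key.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "purchase": "widget"}
# Tokenize the identifying fields; leave the analytic fields intact
safe = {k: (pseudonymize(v) if k in {"name", "email"} else v)
        for k, v in record.items()}
```

The "purchase" field survives for analytics while "name" and "email" become tokens; the AI angle in item 8 is applying this at scale, deciding which fields are identifying, and probing whether combinations of the remaining fields still re-identify individuals.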


All these use cases are within the capabilities of existing technology, and the decision to pursue any combination is based on specific circumstances.  However, the overriding point is that they pave the way toward much more flexibility and scalability of a privacy program that is coming under increasing pressure to perform.  So the benefits are:

  • Greater flexibility
  • More scalability and leverage of resources
  • Lower risk of non-compliance
  • Less impact and burden to the business
  • Managed cost


At a high level, the risk is that the tools fail to detect or prevent an unauthorized use or disclosure of information pertaining to individuals, whether because the algorithms don’t work as intended or because they are not properly implemented. These are project and operational risks and should be managed through normal risk management processes.

But by keeping in mind the current state and the trajectory business is on, the reality is that leveraging Intelligent Automation and Artificial Intelligence makes sense.  It’s going to happen.


As stated at the outset, when it comes to the use of data in a business context, there are a few absolute truths: (1) businesses will continue to gather and process more and more information about people to meet their goals; (2) we will continue to see larger and more far-reaching data events involving personal information; and (3) regulators will continue to respond with increasingly complex requirements around the handling of personal information.

Many industries are being disrupted by the creative and innovative use of data.  The privacy profession — increasingly in the spotlight, yet dependent on manual processes — is quickly becoming a good candidate for reinvention.  People will benefit, as it will open avenues for business to provide new products and services designed to make their lives better, while at the same time lowering the risk to them for participating.
