GDPR and insurance companies: what will change?

The General Data Protection Regulation (GDPR) comes into force on May 25, 2018. In the meantime, companies in the insurance sector, like all others, must comply with its new requirements, ensuring that they properly manage the confidentiality of the information they have collected on, or transferred about, European citizens. But what will change? What advice does the CNIL, the reference authority in France for the application of the GDPR, give to insurance organizations, advice that even US companies can apply?

A “compliance pack” set to evolve

By May 25, 2018, the GDPR's enforcement date, the CNIL plans to update its existing compliance packages and propose new ones. The insurance sector is first in line. It must be said that insurance companies collect a considerable amount of data every year, which allows them to create personalized offers, adjust tariffs, and follow the evolution of the market and consumer needs.

The insurance compliance package proposed by the CNIL will therefore soon be enriched with a GDPR component, in addition to the reminder of the standards to which these companies are subject. Still, by studying the text of the new General Data Protection Regulation, it is already possible to outline its contours.

Remember: the rights of your customers

Let’s start with a quick reminder: what rights does the GDPR grant your customers? The most important are undoubtedly the following. These are the ones that will require a whole new approach to information governance in the insurance industry:

  • The right of access to the data
  • The right to be informed about how the data is processed
  • The right to rectification
  • The right to object
  • The right to data portability, in some cases (more on this below)
  • The right to be forgotten

These rights, such as the right of access to data, are not fundamentally new; most are already enshrined in France's Data Protection Act of 1978. Those that already existed are nevertheless strengthened, reaffirmed and harmonized at the European level.

Thus, in the insurance sector, it is essential to master (and be able to communicate) the following information: the personal data recorded, their provenance, the names and roles of the persons authorized to use them, the purpose and use of the data, their location, and who has access to them. Article 20 of the GDPR grants any holder, past or present, of an insurance contract the right to receive a copy of his or her personal data in a common, easily readable format.
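To make "a common, easily readable format" concrete, here is a minimal sketch that serializes a hypothetical policyholder record to JSON. Every field name and value is invented for illustration; this is not a CNIL template or any insurer's actual schema.

```python
import json

# Hypothetical policyholder record covering the items an insurer must
# be able to communicate: the data, its provenance, authorized users,
# purpose, and storage location. All values are illustrative.
policyholder = {
    "name": "Jean Dupont",
    "policy_number": "POL-2018-00042",
    "data_recorded": ["address", "vehicle", "claims history"],
    "provenance": "online subscription form",
    "authorized_users": ["claims department", "actuarial team"],
    "purpose": "contract management and pricing",
    "storage_location": "EU data center",
}

def export_personal_data(record: dict) -> str:
    """Serialize a customer's personal data to a commonly used,
    machine-readable format (JSON), as data portability requires."""
    return json.dumps(record, indent=2, ensure_ascii=False)

print(export_personal_data(policyholder))
```

Any structured, widely supported format (JSON, CSV, XML) would satisfy the same idea; JSON is used here simply because it is both machine-readable and human-readable.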

Insurance: how to be in compliance with the GDPR?

As an insurance company, you cannot take the risk of being out of compliance with the GDPR. Complying avoids both a commercial risk (a sanction could have unfortunate consequences for your image and reputation) and a significant financial one: fines can reach €20,000,000 (over US $23 million) or 4% of annual worldwide turnover, whichever is higher.

Therefore, the first step toward GDPR compliance is to appoint a DPO, or Data Protection Officer. The DPO's mission will be to ensure that the law is respected and that processes are put in place to enhance your company's transparency. In particular, he or she will have to make sure that, as of next May, you are able to:

  • Group all exchanges with a customer, whatever the points of contact used (email, telephone, postal mail, in-branch visits, etc.), within the same record
  • Demonstrate that your customers have consented to the use of their personal data
  • Clarify, in the event of a regulatory audit or at the request of customers, the use made of their personal data
  • Set up information governance based on documentary traceability, storage security and responsiveness

What the CNIL recommends

The work required to become GDPR compliant should be carried out gradually. Thus, the CNIL recommends that insurance companies, like others, carry out four main steps.

  1. First, an organizational step: designating the DPO, defining his or her position in the hierarchy, and setting up steering committees.
  2. Then, a "risks and internal controls" workstream, allowing you to take stock of current practices and the elements to be corrected.
  3. Next, the deployment of information governance tools (access, traceability, security, communication, etc.).
  4. Finally, an awareness step, internal and external, on the new governance of information, to complete the implementation of the GDPR in the insurance sector.

Compliance with GDPR is not optional for companies in the insurance industry. If you’re looking for help figuring out what you need to do, give us a call.


7 Reasons Legacy ECM should be replaced – Your Data Migration Strategy Simplified

Shifting to a Modern Enterprise Content Management System

Current Situation…

Setting a data migration strategy is vital as there are many challenges with legacy systems. With Everteam, your data migration strategy is simplified.

One of the most difficult challenges CIOs face is maintaining and upgrading legacy systems such as FileNet P8 and Content Manager 5.2.1, LaserFiche 8 and earlier versions, Documentum 6.5, and so on. While technology continues to evolve, the business value of legacy systems weakens: staying on them brings countless disadvantages that can do tangible damage to your company. Legacy IT systems are no longer prepared for change, as software vendors have discontinued support for them, which also means your company will pay extremely high maintenance costs. As those costs escalate, security threats increase too, because legacy systems make security worse, not better, with age, especially once installing upgrades and patches is no longer possible. Performance suffers as a consequence, and meeting customer expectations becomes impossible. New-generation technology can keep up with volume and performance, unlike legacy systems with their restricted update capabilities, and new-generation employees are more familiar with the latest technologies. Imagine the difficulty of finding someone with the knowledge and technical skills for a legacy system.

Urgent Need for a Data Migration Strategy

Real Issues with Legacy Systems that WILL Damage Your Organization…

Let us see why it makes sense to migrate away from old legacy systems before they continue to hold your company back, and how your data migration strategy can be simplified.

1) Discontinued support from software vendors

With many in the IT industry focused on improving the operating systems organizations use, vendors refuse to further support legacy systems and instead push their clients to upgrade. The older an application gets, the more difficult it becomes to acquire support. For example, IBM support for older software and hardware for version 05.02.01 (IBM official support) will be discontinued in September 2018.

 2) Performance and Security-related threats

Legacy systems are unable to keep up with volume, performance and high-availability demands. This means an increase in manual labor and, consequently, a higher potential for bottlenecks and other inefficiencies. More time is spent trying to understand high volumes of data instead of focusing on the real tasks at hand that could actually improve employee performance and efficiency. Without the ability to install upgrades and patches, this also creates a higher risk of data loss and security-related threats.

3) Higher Costs

This is probably the most obvious disadvantage of legacy systems. Not only is the cost of maintaining old systems high, but other costs make them expensive too: hiring specialists familiar with legacy systems, and support engineer pay rates, since most engineers are not familiar with such systems. A specific IT environment and hardware are needed to run the solution, as legacy systems cannot be installed or run on just any existing environment. Finally, since legacy systems are based on old technology, they cannot support your company's constantly evolving needs, resulting in increased costs for expanding the solution, installing upgrades, developing new features, deploying new solutions or integrating with other systems.

4) Difficulty in Finding Experienced Labor

There is no doubt that legacy systems are based on antiquated technologies, so the technical knowledge needed to operate and maintain them has become scarce. New-generation employees are more familiar with the latest technologies. It is very challenging for companies to find experts who work with legacy systems and know how to operate them. By contrast, you can easily find IT support personnel already familiar with the latest new-generation technologies and databases, saving the time, effort and costs of maintaining old-fashioned technical skills.

5) Client Differentiated Versions

In this section, we address our experience in the Middle East specifically. Each country in the Gulf sets country-specific standards for data and information exchange, especially when dealing with government entities (for example, YESSER in Saudi Arabia and the MOTC standards in Qatar), and these requirements cannot be disregarded when putting in place a large-scale enterprise content management implementation. One of the many challenges is the involvement of software integrators (i.e., implementers) in most implementations in our region, because software vendors are not physically present there. As a result, and also due to customers' ever-changing functional and technical requirements, each customer ends up with a custom-developed solution, with a different version installed at each client. Upgrading in those cases becomes difficult, if not impossible, which eventually leads to discontinued versions in addition to discontinued application support.

6) User Adoption

User adoption is almost always a necessary condition for the success of a project. With legacy systems there are many adoption challenges, particularly for newer end users, who are not familiar with legacy systems. Even if the company chooses to train those employees, it is demotivating to train a new generation on an old system while everyone else is moving forward.

7) Lacking Customer & User Experience

Old-fashioned legacy systems do not support the interfaces customers use nowadays: tablets, smartphones and laptops, not to mention web-based user interfaces. Performance suffers as a result, creating a negative user experience with degraded document viewing and annotation. This affects overall business performance and innovation.

Migration with Everteam…

Everteam has built a strong reputation over its 25 years of experience in the field, with user experience at the center of every implementation. Migrating to an Everteam solution provides competitive advantages that will make you wish you had migrated long ago. All our clients run the same solution version, customized to fit their individual business needs. Our solutions, proven worldwide, are well in sync with the latest technologies in the field of ECM. With an average implementation time frame of two months, your organization will get a modern browser-based interface with improved performance and user experience. Everteam has offices across the MENA region to offer customers a direct level of support, sparing them the hassle of dealing with resellers who tend to make the experience unpleasant.

But wait, there is a simpler way! With Everteam, you don’t have to migrate, you can simply archive your legacy!

With Everteam, setting a data migration strategy is facilitated, and a simpler approach to data migration is data archiving. Instead of migrating older, unused data, you can archive it so that it is kept and can be referenced from the current system. This is suitable when older data is important to the organization and may be needed for future reference, or must be retained for regulatory compliance. Archiving data is a win-win scenario: a pre-determined set of business requirements is used to move data and/or documents from legacy systems to cheaper, secure and accessible storage. Think of the journey as three main stages: 1) capture data using our standard connectors and upcoming ones; 2) manage data by classifying it, applying retention rules and destroying it based on pre-defined workflows; 3) finally, store records while making them available across all search points within the organization for descriptive and predictive analytics.
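The three stages above can be sketched in a few lines of Python. This is only an illustration of the capture/manage/store flow, with an invented record set and retention schedule; real connectors, classification rules and workflows are far richer.

```python
from datetime import date, timedelta

# Hypothetical legacy records; ids, types and dates are illustrative.
legacy_records = [
    {"id": 1, "type": "invoice", "created": date(2005, 3, 1), "body": "..."},
    {"id": 2, "type": "contract", "created": date(2016, 7, 9), "body": "..."},
]

# Assumed retention schedule, in years, per record type.
RETENTION_YEARS = {"invoice": 10, "contract": 30}

def capture(records):
    """Stage 1: pull records out of the legacy system via a connector."""
    return list(records)

def manage(records, today):
    """Stage 2: classify and apply retention rules; expired records
    are destroyed (simply not carried forward here)."""
    kept = []
    for r in records:
        expiry = r["created"] + timedelta(days=365 * RETENTION_YEARS[r["type"]])
        if expiry >= today:
            kept.append(r)  # still within its retention period
    return kept

def store(records, archive):
    """Stage 3: move surviving records to cheap, searchable storage."""
    for r in records:
        archive[r["id"]] = r
    return archive

archive = store(manage(capture(legacy_records), date(2018, 1, 1)), {})
print(sorted(archive))  # prints [2]: only the record still under retention
```

The 2005 invoice falls outside its ten-year retention window and is destroyed, while the 2016 contract survives into the archive, which is exactly the "keep only what the rules require" outcome the archiving approach aims for.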



I Want To Migrate With Everteam!



Blockchain Technology

Redefining the Future of e-Services?

We have seen many changes in the digital landscape of the internet over the past decade, and many developments have taken place. One of the latest, and the one most often on the lips of CEOs and CTOs, startup entrepreneurs, and even governance activists, is blockchain technology. While some of you are slightly familiar with blockchain, a wide majority may have heard of it without really understanding how this new technology will redefine the future of internet transactions.

The internet became a tool that decentralized our information; it allowed us to interact with anyone by sending any piece of information in less than a second. Yet, with all the advances the cyber world brought, it still has a gloomy, risky side, as any piece of digital information is at risk. Online store owners continue to put themselves and their customers in jeopardy of payment fraud, due to insufficient internet safety.

The simplest example of an online transaction is an online payment. When you buy an item online, moving the money from your bank involves many intermediaries, and each of them takes a transaction fee, which makes it very costly. In the real world, however, you do not need an intermediary to check the money and transfer it. Now, you didn't think I would share all this without some good news toward the end, right? Today, through blockchain, we can send money the way we send an email. Blockchain was invented to support an alternative to currency known as Bitcoin, and it may also be used for voting systems, online signatures and many other applications. To understand bitcoin, we first need to understand what a blockchain is.

So, what is Blockchain Technology?

Blockchain stores information across a network of personal computers, allowing the information to be distributed; no central company owns the system, which protects the integrity of each piece of digital information. This different approach to storing information is suitable for environments with high security requirements and value-exchange transactions, because no single person is allowed to alter any record. It does not only allow us to create safe money online; it allows us to protect any piece of digital information, such as contracts and online identity cards. So what is bitcoin again? Bitcoin is a form of digital cash (a cryptocurrency) which can be sent to anyone across the internet. Using bitcoin means there is no intermediary involved. Transactions are verified by a network of people all over the globe who validate other people's bitcoin transactions. Blockchain tracks records of this digital cash to validate that only one person owns it at a time.

Just the Beginning of Blockchain Technology

Blockchain technology brings many advantages, and this is only the beginning. Blockchain has been shown to cut costs, strengthen cybersecurity, empower users, reduce the clutter of multiple ledgers and, most importantly, prevent transaction tampering. This is just the tip of the iceberg, as they say: blockchain technology is predicted to develop so much that soon we will be able to protect our online identities and track the many devices on the Internet of Things. Blockchain has in fact extended its reach beyond an alternative payment system to revolutionize the entire IT world. For example, a refrigerator connected to the internet and equipped with sensors could eventually use blockchain to manage automated interactions with the external world, anything from ordering and paying for food to arranging its own software upgrades and tracking its warranty.

Blockchain Technology and Data Management platforms…

Blockchain technology has shown that it is convenient not only for financial transactions but also for other sectors that deal with information and data, such as the public sector. Blockchain technology can simplify the management of information: thanks to its decentralized nature, information is managed in a secure infrastructure, giving blockchain leverage over other digital technologies.

These sectors value privacy, so managing confidential information becomes critical for information security. Public and government organizations have their doubts about storing their data in the cloud, as they have specific needs, such as storing data within their borders for security and political reasons. The data used and shared within those organizations includes highly confidential records such as birth and death certificates, marital status certifications, business licenses, criminal records and even property transfers. As a consequence, though organizations in the public sector are managing their data electronically, some records still remain in hard copy, forcing people to be present on site to complete record-related transactions.

The strength of blockchain comes from the way it is built: a series of blocks that record data along with hashes and timestamps, so that the data cannot be tampered with. This gives government organizations a guarantee of secure data storage that cannot be manipulated or hacked, leading to improved management of information in the public sector and paving the way for fully smart and secure cities and environments.
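To make the "series of blocks with hashes and timestamps" idea concrete, here is a toy hash chain in Python. It is a teaching sketch only, not how any production blockchain (or any vendor's connector) is implemented, and the record contents are invented:

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Each block stores its data, a timestamp and the hash of the
    previous block; the block's own hash covers all three fields."""
    block = {"data": data, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def block_hash(block):
    """Recompute a block's hash from its recorded fields."""
    fields = {k: block[k] for k in ("data", "timestamp", "prev_hash")}
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

genesis = make_block("birth certificate #001", prev_hash="0" * 64)
second = make_block("property transfer #002", prev_hash=genesis["hash"])

assert block_hash(second) == second["hash"]  # chain is intact

# Tampering with an earlier block is immediately detectable: its
# recomputed hash no longer matches the prev_hash stored downstream.
genesis["data"] = "forged record"
assert block_hash(genesis) != second["prev_hash"]
```

Because each block's hash is embedded in the next block, altering any record invalidates every hash after it, which is the property that makes the stored records tamper-evident.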

There is no doubt that blockchain technology has gained a wide share of the marketplace, and everyone is asking whether ECM providers (such as Everteam) will combine blockchain into their solutions. You can rest assured that we will be integrating blockchain technology into our solutions; in fact, our blockchain connectors are expected to be released in late 2018, so stay tuned! Subscribe to our newsletter HERE.


Who are the Data Controllers and Data Processors in GDPR?

In my last blog, I talked about the definition of Personal Data and the various data protection actions that Data Controllers and Data Processors may apply to this Personal Data (Anonymize, Pseudonymize and Minimize).

But who are these Data Controllers and Data Processors?

These are the parties that capture, process and store Personal Data belonging to Data Subjects. Under the GDPR Regulation, these parties have obligations to protect the Personal Data of these Data Subjects.

Data Controllers/Data Processors

Data Controllers

This is “the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data […]”.

In plain English, this is the party (individual, entity or authority) with which the Data Subject exchanges his or her Personal Data to receive the goods and services.

The GDPR Regulation imposes a range of data protection obligations on the Data Controller, including:

  • Restrict the scope of data that can be collected and the duration of retention of this data
  • Seek and obtain the consent of the Data Subject BEFORE the Personal Data is captured
  • Once received, protect this data
  • Notify the supervisory authority (and, where required, the Data Subjects) if/when a data breach occurs
  • Appoint a Data Protection Officer or DPO (under certain conditions) – covered in a future blog

Data Processors

Similarly, the Data Processor is “the natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller.”

This is the party that performs some or all of the processing on behalf of the Data Controller. One of the game changers with GDPR is that Data Processors also have obligations under the regulation, and these obligations apply even to Data Processors located outside EU jurisdictions, for example a US-based cloud provider performing data processing on behalf of a Data Controller located within the EU:

  • Must implement specific organizational and technical data security measures
  • Keep detailed records of their processing activities
  • Appoint a Data Protection Officer or DPO (under certain conditions)
  • Notify data controllers if/when a data breach occurs

In view of these GDPR obligations, Data Controllers must apply more due diligence to the processes by which they select new Data Processors and re-qualify existing ones.

Data Controllers must also determine whether they fall under the GDPR Regulation and identify their responsibilities and measures they must implement vis-à-vis the Personal Data they process.

Lots more to talk about here, but suffice it to say that organizations that fit the definitions of Data Controllers and Data Processors should assess their GDPR-related Data Protection obligations and implement measures and technology-based solutions to enable and enact their compliance.

I will cover further aspects of the GDPR Regulation in upcoming blogs, namely the rights of Data Subjects.

Bassam Zarkout

Personal Data in GDPR and How You Can Deal With It

In my last blog, I made a general intro of the EU General Data Protection Regulation (GDPR), the upcoming directive for data privacy due to come into effect on May 25th, 2018. GDPR grants broad rights to Data Subjects over the way their “Personal Data” is handled. It places obligations on “Data Controllers” and “Data Processors” to protect the Personal Data of “Data Subjects.”

In this blog, I will focus on the topic of “Personal Data.”

GDPR Chapter 1 Article 4 defines “Personal Data” as

“any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”.

  • Data: stored information
  • Personal: the information relates to an identified or identifiable “natural” person – meaning the identification of the person (an individual) is possible using the data

The GDPR definition of Personal Data is wider in scope than commonly used terms like PII (Personally Identifiable Information), PHI (Personal Health Information), and PCI (Payment Card Industry). In fact, Personal Data can relate to any mix of the following:

  • Personal: name, gender, national ID, social security number, location, date of birth
  • Physical, genetic, psychological, mental, cultural, social characteristics, race, ethnic, religious, political opinions, biometric, etc.
  • Online computer identifiers
  • Medical, financial, etc.
  • Organizational: recruitment, salary, performance, benefits, etc.
  • Other

It is worth noting that GDPR does not apply to deceased persons. However, their data “may” be deemed personal for their descendants if this data gives hereditary information. Also, the “identifiability” of a Data Subject is a moving target because it depends on his or her circumstances.

There are three important terms to learn about regarding Personal Data in GDPR:


Anonymize Personal Data

Data Controllers and Processors can protect Personal Data by anonymizing it. This is the permanent modification of Personal Data in such a manner (randomize or generalize) that it cannot be attributed back to the Data Subject. It is also an irreversible process, meaning that the data cannot be restored back to its original identifiable form. Anonymized data is not subject to GDPR restrictions.
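A minimal sketch of what "randomize or generalize" can look like in code, on an invented record. Real anonymization schemes require a re-identification-risk analysis well beyond this, so treat it purely as an illustration of the idea:

```python
# Invented record; field names and values are illustrative only.
record = {"name": "Alice Martin", "national_id": "1850756789",
          "age": 37, "postcode": "75011"}

def anonymize(rec):
    """Irreversibly drop direct identifiers and generalize the rest;
    the output can no longer be attributed to the Data Subject."""
    anon = {k: v for k, v in rec.items() if k not in ("name", "national_id")}
    anon["age"] = (rec["age"] // 10) * 10           # generalize 37 -> 30
    anon["postcode"] = rec["postcode"][:2] + "xxx"  # generalize location
    return anon

print(anonymize(record))  # {'age': 30, 'postcode': '75xxx'}
```

Note that the original identifiers are simply discarded, never stored anywhere: that is what makes the process irreversible, and why the result falls outside GDPR restrictions.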

Pseudonymize Personal Data

Data Controllers and Processors can pseudonymize personal data by processing it in such a manner that “it can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”.

This approach:

  • Carries a higher risk than anonymization and requires technical and procedural controls.
  • Strikes a better balance between the interests of Data Subjects and those of Data Controllers/Processors.
  • Leaves the data subject to GDPR controls, since Personal Data can be re-identified from it.
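The difference from anonymization can be sketched as follows: the identifier is replaced by a random token, and the token-to-identity table is the "additional information" that must be kept separately and protected. Names and structure here are illustrative only:

```python
import secrets

# The lookup table is the "additional information": it must be stored
# separately, under technical and organisational safeguards.
lookup = {}

def pseudonymize(record):
    """Replace the direct identifier with a random token."""
    token = secrets.token_hex(8)
    lookup[token] = record["name"]  # kept apart from the working data
    pseudo = dict(record)
    pseudo["name"] = token
    return pseudo

def reidentify(pseudo):
    """With access to the lookup table, the data can be restored,
    which is why pseudonymized data stays subject to the GDPR."""
    rec = dict(pseudo)
    rec["name"] = lookup[pseudo["name"]]
    return rec

original = {"name": "Alice Martin", "claim": "water damage"}
masked = pseudonymize(original)
assert masked["name"] != "Alice Martin"   # working copy is de-identified
assert reidentify(masked) == original     # reversible with the table
```

Unlike the anonymization case, the process is deliberately reversible for whoever holds the lookup table, which is exactly why the controls listed above remain necessary.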

Minimize Personal Data

The GDPR states that Personal Data should be “adequate, relevant and limited to what is necessary for the purposes for which they are processed. This requires, in particular, ensuring that the period for which the personal data are stored is limited to a strict minimum. Personal data should be processed only if the purpose of the processing could not reasonably be fulfilled by other means.”

The word “necessary” is critical here. It means that Data Controllers and Processors may only collect data that is necessary for the purpose of the transaction with the Data Subject, and may retain this data only for a strict minimum period.
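In code, minimization amounts to a whitelist driven by the stated purpose. The purpose and field names below are invented for illustration:

```python
# Assumed purpose: producing a car insurance quote. Only these fields
# are "necessary" for that purpose under our illustrative policy.
NECESSARY_FIELDS = {"name", "address", "vehicle"}

def minimize(form_data):
    """Keep only the fields necessary for the stated purpose; drop the rest."""
    return {k: v for k, v in form_data.items() if k in NECESSARY_FIELDS}

submitted = {"name": "Bob", "address": "1 Rue X", "vehicle": "Peugeot 208",
             "religion": "n/a", "salary": 42000}

print(minimize(submitted))
# {'name': 'Bob', 'address': '1 Rue X', 'vehicle': 'Peugeot 208'}
```

The over-collected fields never enter storage at all, which is the cleanest way to satisfy "limited to what is necessary"; a retention clock on the kept fields would then enforce the "strict minimum" storage period.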

In my coming blogs, I will cover the various rights of Data Subjects vis-à-vis Personal Data, for example:

  • Right to consent
  • Right to be forgotten
  • Right to rectification
  • Right to data portability
  • Right to object
  • Right to limited usage of collected data
  • Right to be notified of data breaches

Bassam Zarkout

Subscribe to the InfoGov Insights newsletter to stay up to date on things related to Information Governance.

An Interview: Modernization and the Legacy Systems Headache

Our own Dan Griffiths was approached this past summer to provide his insights into the legacy system headaches that organizations face today. You can read the full piece on the Finanicer.com website in their August issue, or you can read on to hear Dan’s views on modernization.

SYNOPSIS
Although they may well have been considered state-of-the-art in their day, the IT currently being utilised by many financial institutions (FIs) is several decades old – legacy systems that are pretty much creaking at the seams. Without doubt, the continued use of such outdated systems (often large and cumbersome IT infrastructures) is making it very difficult indeed for FIs to adapt to meet the new demands of customers and regulators. For many of those operating in and around the financial sector, FIs are very much in survival mode, with little capacity for new development to replace legacy systems with much-needed innovation and modernisation.

1. With many financial institutions (FIs) continuing to operate legacy IT systems which are decades old, how pressing and problematic is the need to maintain or replace them?

Dan Griffiths: For many financial institutions, it's very important to maintain, and in some cases replace, legacy IT systems if they are going to deliver modern customer experiences and still adhere to regulations that are increasing and continually changing. However, for business continuity reasons, many legacy systems cannot be replaced. In these cases, a modern agile process framework is very helpful to connect legacy systems to the web portals and mobile applications that are key customer interfaces today.

When systems are replaced, FIs face challenges figuring out what to do with the data they maintain. Knowing what data to keep for compliance and business continuity requires an agile approach to application decommissioning.

2. How do aging legacy systems affect the ability of FIs to compete in an aggressive business environment? Are they compromising efficiency, agility and innovation?

Dan Griffiths: Legacy systems adversely affect the ability of FIs to compete because they are too rigid and cannot change quickly without a massive coding and development overhaul. Without something additional (i.e., a modern agile process framework) they can't change or innovate quickly, deeply affecting their ability to keep up with changing markets and growing customer expectations.

3. To what extent are these systems the result of patching together systems that were never intended to integrate?

Dan Griffiths: Many of the systems in financial services reside in silos, often in different business divisions. This siloed environment makes it increasingly difficult to integrate systems without a modern agile process framework. Many of these systems were never intended to integrate, but that integration is now critical to customer experience success. Without integration, FIs don't have a single view of the customer across products and services, and they can't provide a consistent, seamless customer experience.

4. Are FIs reluctant to spend money on legacy systems due to a “if it ain’t broke don’t fix it” mentality? What are the cost and time implications of replacing legacy IT?

Dan Griffiths: In today's climate of shrinking IT budgets, the adage "if it ain't broke, don't fix it" may be the prevailing mantra, but it often results in more costs and issues than replacement does. In some cases, FIs might put a new system in place yet leave the legacy system running to maintain existing records. Storage costs, IT expertise and time-consuming coding changes all result in higher-than-expected costs for maintaining legacy systems. What surprises many is that retiring legacy systems and migrating data can be done quickly and yield bigger cost savings through an agile application decommissioning strategy.

5. What options are available to FIs to solve the legacy problem? Are there ready alternatives that are easier to use and deploy – for example, enterprise content management (ECM)?

Dan Griffiths: There are a few options to solve the legacy problem. In cases where it isn't feasible to replace a legacy system, FIs can introduce business process automation frameworks to connect these systems with modern interfaces such as portals and mobile applications. This approach enables FIs to keep data in their legacy systems yet make it accessible to modern customer experiences.

Another option is to migrate legacy systems to modern applications following an agile application decommissioning strategy. The key is to migrate only the data needed for compliance and business continuity, and to put in place a solution that will manage archived data appropriately, including its eventual defensible destruction.

6. Do you expect to see an uptick in the number of FIs replacing their legacy IT systems in the years to come? What steps should they take to incorporate this process into their long-term corporate strategy?

Dan Griffith: Yes, we expect to see more FIs replacing legacy systems for a variety of reasons. The key is to let employee efficiency and customer experience drive the priority of a modernization strategy. A simple replacement strategy where you turn on the new system and turn off the old one is not possible for most organizations. Using a business process automation framework can lead to quicker results by enabling access to some legacy systems through modern interfaces while critical systems are migrated.

FIs also need to ensure they are preserving only the data necessary when they migrate to newer systems. An analyze, classify, migrate and manage approach to application decommissioning will ensure compliance is met and the right data is available in the new environment.
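The "analyze, classify, migrate and manage" approach Dan describes can be pictured as a filtering pipeline: each record is classified against a retention policy, only what the policy requires is migrated, and each migrated record carries a destruction date so archived data can later be defensibly destroyed. A minimal sketch; the record fields and policy categories are illustrative assumptions, not a real product API:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    record_id: str
    category: str          # hypothetical classification, e.g. "loan", "temp"
    last_modified: date

# Hypothetical retention policy: category -> years to keep (None = do not migrate)
RETENTION_YEARS = {"loan": 7, "account": 10, "marketing": None, "temp": None}

def classify_and_migrate(records):
    """Classify each record; migrate only what a policy requires, stamping a
    destruction date so archived data can later be defensibly destroyed."""
    migrated, discarded = [], []
    for rec in records:
        years = RETENTION_YEARS.get(rec.category)
        if years is None:
            discarded.append(rec.record_id)   # decommission without migrating
        else:
            destroy_on = rec.last_modified + timedelta(days=365 * years)
            migrated.append((rec.record_id, destroy_on))
    return migrated, discarded
```

The point of the sketch is the split: the legacy system can be retired once the "migrated" set is in the new environment and the "discarded" set has been defensibly deleted.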

GDPR & You: Are You Ready for the New European Data Protection Regulation?

There is a fundamental transformation underway. In the digital economy information is the currency of exchange. And, information knows no boundaries. Harmonization of regulations that fosters the free flow of information while strengthening privacy and security rights is an imperative for policy makers.

Take the EU and US trading bloc as an example. The total value of goods and services exchanged between the two largest trading blocs is estimated at $5.5 trillion, employing 15 million people. Cross-border flows between the EU and the US are estimated to be 50% higher than between any other trading blocs. 65% of US investment in information technology is in the EU.

Identity theft and the impact of security and privacy breaches are damaging customer experience and customer loyalty at increasing levels. They are also driving regulators to bolster data security and privacy legislation and to impose stricter obligations on businesses and data controllers. Enter the new European General Data Protection Regulation (EU GDPR).

A response to advances in digital technologies such as big data, cloud computing and predictive analytics, coupled with revelations of bulk data collection and profiling by intelligence services, the General Data Protection Regulation (GDPR) is a comprehensive overhaul of privacy legislation that considerably strengthens and expands privacy rights.

It spans more rigorous consent requirements, data anonymization, the right to be forgotten and breach notification. For the most serious violations, data watchdogs can levy fines of up to €20 million or 4% of global annual turnover for the preceding financial year, whichever is greater. For other breaches, the authorities can impose fines of up to €10 million or 2% of global annual turnover, whichever is greater. For the average Fortune 500 company, that puts potential fines in the range of $800-900M.

In this new AIIM e-book (sponsored by Everteam) – Information Privacy and Security: GDPR is Just the Tip of the Iceberg, the focus is on five key questions that should be on every C-level executive’s list of priorities:

  1. How has the environment for information privacy and security changed?
  2. What is GDPR, why should you care, and what does it mean for your organization?
  3. What does “Privacy by Design” Mean?
  4. How will the Internet of Things make the privacy equation even more complicated?
  5. What should your organization do about all of this, and what role will machine learning play in solving the problem?

You can download your copy of the ebook here. And sign up for our newsletter to get more insights and guidance on GDPR and information governance straight to your inbox.

What is GDPR and how is it related to Information Governance?

Unless you have been in a cave in the past year, you must have heard about GDPR, the European General Data Protection Regulation. It is a comprehensive data privacy regulation that takes effect on May 25th, 2018. It builds on the current EU Data Protection Directive and unifies data protection laws across EU countries.

Note: This is the first in a series of posts on the subject of GDPR compliance.

GDPR at a high level

  1. Data Privacy is a fundamental right of “natural persons” (called Data Subjects; essentially individuals located within EU jurisdictions, regardless of citizenship).
  2. This right relates to Personal Data: any information exchanged between Data Subjects and Data Controllers (providers of products and services) or Data Processors (their outsourcers) that can be traced back to the Data Subject:
    • Personal: name, gender, national ID, location, DOB, physical, genetic, psychological, mental, cultural, social characteristics, online computer identifiers, medical, financial, etc.
    • Organizational: recruitment, salary, performance, benefits, etc.
    • Other: race, ethnic, religious, political opinions, biometric, etc.
  3. These Privacy Rights state that you can ONLY collect Personal Data lawfully and for legitimate reasons, and you are limited to using it for what is necessary and what it was intended for:
    • Right to consent
    • Right to be forgotten
    • Right to rectification
    • Right to data portability
    • Right to object
    • Right to limited usage of collected data
    • Right to be notified about data breaches

It is worth noting that the above GDPR restrictions apply to Data Controllers and Data Processors even if they are located outside EU jurisdictions (for example, a US-based cloud provider).
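In system terms, each of these rights maps to an operation a Data Controller must be able to perform on a Data Subject's records. A highly simplified dispatcher over a toy in-memory store, to make the mapping concrete; the store and request names are hypothetical, not from any real compliance product:

```python
import json

PERSONAL_STORE = {}  # data_subject_id -> dict of personal data fields

def handle_request(subject_id: str, request: str, payload=None):
    """Dispatch a GDPR data-subject request against a toy in-memory store."""
    record = PERSONAL_STORE.get(subject_id, {})
    if request == "access":            # right of access
        return dict(record)
    if request == "rectify":           # right to rectification
        record.update(payload or {})
        PERSONAL_STORE[subject_id] = record
        return record
    if request == "erase":             # right to be forgotten
        PERSONAL_STORE.pop(subject_id, None)
        return None
    if request == "port":              # right to data portability
        return json.dumps(record)      # machine-readable export
    raise ValueError(f"unsupported request: {request}")
```

In a real organization the hard part is not the dispatch but the reach: the same operations must cover every repository, backup and outsourced processor that holds the subject's data.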

GDPR defined

Organizations found to be non-compliant can face significant fines of up to 20 million Euros (roughly US$ 23.5 million) or 4% of global annual revenue, whichever is greater. Not small change.

If you do the math, a US$ 10B corporation found to be non-compliant may be fined US$ 400 million.
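The math is simple: the ceiling is the greater of a fixed cap and a percentage of global annual turnover, with the cap and percentage depending on the severity of the infringement. A minimal sketch:

```python
def max_gdpr_fine(annual_turnover_eur: float, severe: bool = True) -> float:
    """Upper bound of a GDPR administrative fine.

    Severe infringements: the greater of EUR 20M or 4% of global annual
    turnover; other infringements: the greater of EUR 10M or 2%.
    """
    cap, pct = (20_000_000, 0.04) if severe else (10_000_000, 0.02)
    return max(cap, pct * annual_turnover_eur)

# A EUR 10B company risks up to EUR 400M for a severe infringement;
# for a EUR 100M company the fixed EUR 20M cap dominates.
print(max_gdpr_fine(10_000_000_000))   # 400000000.0
print(max_gdpr_fine(100_000_000))      # 20000000
```

Note that the percentage applies to global turnover, not EU turnover, which is what makes the ceiling so large for multinationals.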

“Sky is falling” statements like this can often produce the reverse effect:

  • There are no signs that the GDPR Supervisory Authorities in the various EU countries will be trigger-happy on May 25th, 2018.
  • Organizations are however advised NOT to take the matter lightly… GDPR is serious business… and violations will probably be handled firmly.

What does all this have to do with Information Governance?

A lot.

GDPR compliance is perhaps the compelling event that organizations have been “waiting for” in order to fully embrace the Information Governance culture. Common “values” that are delivered by effective Information Governance Programs go a long way towards facilitating GDPR compliance, such as:

  • Visibility into content (content analysis, classification, etc.)
  • Data and content minimization (elimination of ROT)
  • Systematic lifecycle management and controls over content

GDPR is a deep subject, and in upcoming posts I will dive a little deeper into its various aspects, such as:

  • Definition and scope of Personal Data
  • Obligations of Data Controllers and Data Processors
  • Privacy Impact Assessments
  • Data Privacy Officer
  • Consent
  • The “vaunted” Right to be Forgotten
  • Right of Data Portability
  • “Privacy by Design” and “Privacy by Default”

Details to come.

Now that you found ROT, do you delete it or quarantine it?

In one of my last blogs, I covered the subject of Redundant, Obsolete and Trivial content (ROT) and the need to perform remediation actions on it. As a reminder, ROT is superfluous content that is lying around in the infrastructure (file shares, SharePoint, etc.). It is content that is not needed and can be deleted.

The question is: “Is all ROT the same?” And if not, should the remediation action be tailored for each type?

Different types of ROT

There are indeed different types of ROT, for example:

  • Information Assets that are no longer needed
  • Duplicates in both Golden and non-Golden varieties
  • Files with specific extensions known to contain non-content-based information
  • Other – TBD

There are also different kinds of remediation actions you can apply (from most drastic to least drastic):

  • Delete there and then.
  • Move to a quarantine area and delete later (after a predefined period).
  • Quarantine in-place and delete later.
  • Do nothing. Just index it in a knowledge base and maintain awareness of its presence.

The question is: how do you decide which remediation action to apply to which type of ROT, and on what basis?

Defining remediation actions for ROT

The answer is “it depends” on your organization and its business drivers and priorities:

  • IT cost pressures
  • Cloud adoption strategy
  • Appetite for legal risk
  • Legal and compliance obligations, for example GDPR
  • Willingness to part with content that may one day have value
  • Other – TBD

The table below provides an example of such a mapping:

Types of ROT  Remediation Actions Delete There and Then (1)Move to Quarantine then delete Quarantine In-place then deleteDo Nothing (2)
[Information Assets not needed for business or any other purpose] AND [their lifecycle does not de-pend on an event] AND [are not responsive to any litigation] AND [any mix of the following…] • Not accessed for a specific period • Older than a certain age • Older than the farthest-reaching litigation XX (3)X
Golden duplicate copy (4)Submit to RM
Non-golden duplicate copyXX
Files with specific extensions (de-NIST)XX
Other (TBD)TBDTBDTBDTBD

Notes:

  1. This action may be appropriate if there is no reason to keep the ROT or if deleting it (in a legally defensible manner) reduces legal and regulatory exposure.
  2. Doing nothing may be appropriate in certain situations, for example when the source repository is about to be decommissioned.
  3. This action refers to moving the ROT to another storage location (on-premises or cloud) that is less easily accessible by end users (or perhaps not accessible at all) and less expensive to maintain.
  4. In reality, this is not ROT.
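The mapping above can be expressed as a simple lookup from ROT type to permitted remediation actions, which a remediation pass could consult before touching any file. A sketch; the labels follow the table, but the structure itself is an illustrative assumption:

```python
# Permitted remediation actions per ROT type, following the mapping table.
REMEDIATION = {
    "unneeded_information_asset": ["delete", "move_to_quarantine", "do_nothing"],
    "golden_duplicate": ["submit_to_records_management"],  # in reality, not ROT
    "non_golden_duplicate": ["delete", "move_to_quarantine"],
    "known_extension_de_nist": ["delete", "move_to_quarantine"],
}

def pick_action(rot_type: str, preferred: str) -> str:
    """Return the preferred action if the table permits it for this ROT type,
    otherwise fall back to the first permitted action."""
    allowed = REMEDIATION.get(rot_type)
    if not allowed:
        raise KeyError(f"unknown ROT type: {rot_type}")
    return preferred if preferred in allowed else allowed[0]
```

Encoding the policy as data rather than scattering it through remediation code makes it easy to adjust as the organization's risk appetite and compliance obligations change.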

The key technology for identifying ROT and performing the remediation actions is File Analysis. It is also key to establishing visibility into the Dark Content lying around in the infrastructure, something you need for compliance with privacy laws such as GDPR.
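One building block of such File Analysis tooling is duplicate detection, commonly done by content hashing: files with identical hashes are exact copies, and all but the golden copy are ROT candidates. A minimal sketch, assuming plain files on a file share:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str):
    """Group files under `root` by SHA-256 content hash; any group with
    more than one path is a set of exact duplicates (ROT candidates)."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}
```

Production tools add refinements this sketch omits, such as hashing in chunks to avoid loading large files into memory, and near-duplicate detection for files that differ only in metadata.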

Note: Follow Bassam’s full series here.

Everteam Wins Award for Best InfoGov Company at InfoGovCon 17

Last week we attended, and happily sponsored, InfoGovCon 17. Providence, Rhode Island is a beautiful place to hang out for a few days and our team spent some time talking to event goers on all things related to information governance.

There is a lot of work to be done to improve how organizations manage their information. It starts with finding it across all the different business systems where information is stored. It was clear from our discussions that file analytics is a critical capability. If you don’t know what information you have and where it’s located, how can you decide how best to manage it?

Of course, file analytics is only the beginning. Once you have that picture you need to start making some decisions. This is where the heart of information governance kicks in.

Bassam Zarkout gave a presentation on File Shares Remediation, the process of determining what information you have and how to organize and deal with it. According to Bassam, there is so much content hidden away in file shares and other repositories, including cloud-based repositories, that organizations need to get a handle on it before it’s too late. (Think GDPR and cyber risk here.) Check out his slides; they are packed with excellent insights!

The conference ended with a nice award for Everteam: we won Best Information Governance Company of 2017, and we couldn’t be more proud. Our team works hard to design and develop the best solutions to support information governance strategies, and we work with a great group of partners to deliver those solutions within the right framework for your company.

InfoGovCon Award

If you are interested in learning more about Everteam and our perspective on Information Governance, start with our Information Governance Overview or leave your contact info and we’ll get in touch.