File Analytics: Going Beyond Digital Archiving

Are you looking for a way to find your most engaging documents quickly? Are you trying to understand what information is stored, and where? You can make sense of your files using the power of Big Data and Machine Learning. With File Analytics, you can automatically purge your storage of duplicates, obsolete elements and content from decommissioned applications.

File Analytics solutions allow you to go further than “classic” File Sharing. File Sharing solutions focus on giving access to data, anywhere in the world and at any time. But what if you want to leverage this content in new ways that can benefit the company and its employees?

File Sharing and its Challenges

While File Sharing tools are perfect for internal corporate communication and information flow, allowing documents to be transmitted quickly and efficiently, they have certain drawbacks that make them, in the long run, much less effective:

  • Documents can be difficult to find. The multiplication of duplicates, suboptimal classification (often based on criteria specific to each user), and the proliferation of shared links all take their toll. The result? File Sharing tools are certainly populated with relevant documents, but there are so many documents stored there that finding the one you need can take hours.
  • Not all documents are valid. A document in a File Sharing tool is not necessarily a relevant document. It can be obsolete, present erroneous information, or have been corrected multiple times since it was added to the share. The result? Your company may unknowingly have harmful elements in its file shares.
  • Access to data is not always secure. With File Sharing tools like shared drives, information security is compromised by the multiplicity of users and access points. Documents “escape” management and the IT department. And, from the outside, attacks can multiply! In addition, confidentiality is endangered because it’s nearly impossible to encrypt document content and control access to it.

It is not reasonable to do information governance with Shared Drives (Google Drive, Microsoft OneDrive, Dropbox, etc.). Their advantages, such as simplicity, are quickly eclipsed by their disadvantages. The most serious of these is the risk to your company’s confidential information.

File Analytics, for Dynamic File System Cleaning

File systems must be “cleared” of spurious documents (duplicate, obsolete, and so on) to keep them efficient. File Analytics tools offer the ability to analyze information and cleanse assets, in order to more easily control the life cycle of files. They also allow you to automatically analyze the contents of documents, with:

  • Extraction of named entities (places, people, job titles…);
  • Extraction of company data (product codes, customer numbers…);
  • Categorization through machine learning…
  • … or through a semantic repository.
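
As a simple illustration of the pattern-extraction idea, the sketch below pulls company data out of document text with regular expressions. The product-code and customer-number formats are invented for the example; real File Analytics tools rely on trained entity-recognition models and semantic repositories, not a pair of regexes.

```python
import re

# Hypothetical extraction patterns -- the formats below are assumptions
# made for this example, not a real product's conventions.
PATTERNS = {
    "product_code": re.compile(r"\bPRD-\d{4}\b"),       # e.g. PRD-1234
    "customer_number": re.compile(r"\bCUST-\d{6}\b"),   # e.g. CUST-000042
}

def extract_company_data(text):
    """Return every pattern match found in a document's text."""
    return {name: pattern.findall(text) for name, pattern in PATTERNS.items()}
```

Running this over a document’s text yields the structured references that make search and classification possible downstream.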

From CRM to Email: Facilitating Data Management

In all sectors, companies capture, store and use large amounts of data. Whether we are talking about a CRM or an email client, content can be:

  • Structured: It is framed by specific references, which will allow a search engine to interpret and locate it more easily;
  • Semi-structured: Data that is not organized according to a given repository, but that includes metadata or other related information, facilitating its processing and exploitation;
  • Unstructured: Content that is not subject to any repository and has no associated information.
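
These three levels can be sketched in a few lines. This is a toy classifier only; the dict keys "schema" and "metadata" are stand-ins for the field definitions and related information a real repository record would carry.

```python
def structure_level(doc):
    """Rough classification of a content item by its level of structure.

    `doc` is a dict: "schema" stands in for database-style field
    definitions, "metadata" for attached related information.
    """
    if doc.get("schema"):       # framed by specific references
        return "structured"
    if doc.get("metadata"):     # free content plus related information
        return "semi-structured"
    return "unstructured"       # raw content, nothing attached
```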

File analytics tools are intended to facilitate access to this data, regardless of its “quality” and level of structure. They can also efficiently govern your corporate information, which is essential when you need to be responsive to the demands of customers, employees and partners.

File Analytics tools make access to quality data faster and more accurate. Want to know more? Connect with our team.

Content Services, Tools for Effective Information Governance

Effective information governance cannot be achieved without the support of dedicated tools, called “Content Services”. Content Services make it possible to manage all data-related processes and ensure that employees find the right information no matter where they are. There are several types of services: archiving, dematerialization, enterprise content management solutions, process automation tools and electronic document management (EDM). Let’s look at each one.

From ECM to Content Services

Content Services are not new, but rather an evolution of ECM (Enterprise Content Management). Why did we change the vocabulary?

Simply: new challenges, new vocabulary. The term “ECM” was found to be too limited, not encompassing all the issues related to content management in business. “Content Services” better describes the role of content applications: offering quick, concrete solutions to business problems, delivered operationally as fast as possible. This is much more than how we traditionally think of ECM, so the change of name is not just a matter of image.


Archiving

Archiving is not just about saving. It is about exploiting data in a new way, offering that data durability and security, as well as the ability to find it from any device connected to the Internet. There are several families of archiving:

  • Digital archiving is a set of actions that aim to identify, collect, classify, preserve, communicate and return electronic documents. Digital archiving can be used to satisfy legal obligations (from a few years to several decades, depending on the type of document), or information needs in the company (from one service to another, depending on the type of document, or one site to another for heritage purposes).
  • Mixed archiving is archiving between digital documents and paper documents. It is a practice that requires the definition of an archiving policy, with the creation or adaptation of a global document repository and the selection of a service offering adapted to the company. Such a project requires the setting up of a dedicated team, coordinated by a project manager who prioritizes the successive stages and facilitates decision-making.
  • Mass archiving. Often identified as the ideal archiving solution by IT departments, mass archiving makes it possible to import and preserve large amounts of electronic resources (documents, emails, videos, archives…). It has a distinct advantage: interoperability with the other blocks of the information system (messaging software, ERP, CRM, DM). This allows all of the company’s applications to file electronic documents, consult them and exploit them, whatever the volume.
  • Archiving with probative value. It has become a major issue for companies, because it’s not just about saving documents. Evidence-based archiving goes further, since it guarantees their authenticity (the documents were produced by those who claim to have produced them), their durability (they will always be accessible when they are needed) and their integrity (they have not been changed by a malicious person). The ultimate goal? That they can be used as irrefutable proof in case of dispute, whether with a client, an administration or another company.


Dematerialization

Dematerialization happens when a company wants to replace the use of paper (or any other physical medium, such as magnetic tape) with digital files stored on servers, suitable media or computers. Many documents may be involved: invoices, administrative procedures, cash flow documents, etc.

There are two main types of dematerialization:

  • “Native dematerialization”, which consists of receiving all new documents in digital format. We then change processes and software.
  • “Duplicative dematerialization”, which consists of copying into digital format the documents initially received in paper format. Information governance consists of “retrieving” documents to integrate them into the dematerialization process.

In both cases, all documents of the company can be made available to employees. Different profiles can be created to “limit” access to these documents for a particular service.


Electronic Document Management (EDM)

EDM, or Electronic Document Management, makes it possible to optimize the management and exploitation of documents by specialized and efficient electronic means. It is based on software that takes several actions on documents:

  • Capture
  • Acquisition
  • Digitization
  • Validation
  • Diffusion
  • Classification
  • Indexation
  • Archiving
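
A toy pipeline can make that chain concrete. The sketch below is illustrative only: the one-rule classifier and the in-memory dicts stand in for the categorization engines, search indexes and archive stores of real EDM software.

```python
def edm_pipeline(document, index, archive):
    """Minimal sketch of an EDM chain: capture -> classify -> index -> archive.

    `document` is a dict with "name" and "text"; `index` and `archive`
    are plain dicts standing in for a search index and an archive store.
    """
    # Capture / acquisition: normalize the incoming document
    record = {"name": document["name"], "text": document["text"].strip()}
    # Classification: a trivial keyword rule stands in for real categorization
    record["category"] = "invoice" if "invoice" in record["text"].lower() else "general"
    # Indexation: map each word to the document for later retrieval
    for word in set(record["text"].lower().split()):
        index.setdefault(word, []).append(record["name"])
    # Archiving: keep the record under its name
    archive[record["name"]] = record
    return record
```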

Document Management, an integral part of information governance in the workplace, relies on process automation, allowing users to focus more on high value-added tasks. It also greatly reduces the risk of errors and omissions.

Process Automation

In the workplace, the same causes often produce the same effects. Yet these causes require employees to repeat a gesture, an action, a decision… unless, that is, we think “process automation”!

Process automation is the essence of Case Management. It combines document management, business processes and collaborative work within a single space. It ensures that all the documents involved in the company’s processes will be processed by the right player and the right service. It has many advantages.

Process automation:

  • Reduces the time required to process applications;
  • Increases productivity and flexibility;
  • Smooths workloads;
  • Is implemented easily and quickly at the heart of the process;
  • Provides a fast ROI and an overview of all activity thanks to dynamic dashboards;
  • Concerns many sectors of activity.

Information governance involves the adoption of smart tools, like Content Services, archiving, process automation, document management and more, that save time and improve efficiency while reducing the risk of errors. Download our Content Services ebook to learn what solutions Everteam provides for Information Governance.

The First Step for GDPR File Share Remediation is the Assessment

Companies are starting to get very nervous. They know that GDPR is right around the corner, but they aren’t prepared. They aren’t completely sure how to get prepared, especially when the new regulations apply only to EU customers. These companies store a lot of data on their customers and, to make it even more challenging, much of it is stored in file shares and cloud drives with no audit trail of what is where. It is a troubling situation. And it’s why an information assessment is so important.

Even if a company isn’t affected by the new EU privacy regulations, there is still the concern that the amount of confidential information stored in many locations across the enterprise puts the company at risk of data theft and exposure of confidential customer information.

A process without a plan

The time to act is now. But here’s the problem. Too many companies are implementing policies and procedures without a clear plan and with little understanding of the information they are storing, and where. This is not the time to rush headlong into implementing a process just to get something in place. Things will get missed. Money will be spent that might be better used elsewhere. Complicated processes may get put in place for information you shouldn’t store in the first place.

First – assess your current situation

Before you set out to adapt your current policies and procedures or add new ones, you need to know what information you have and where it’s located. With that clear view of your information in place, you can make better decisions on how to manage that information properly.

This is the assessment phase you must adopt to ensure successful governance of your information and adherence to regulations like GDPR (and those you can expect to come in the future).

What does a proper assessment look like? It will differ depending on your company, but there are several key steps you should take.

Connect your information silos

Customer information is stored in any number of applications and repositories across the company. Some you know about, others you may not. As employees work with customers, they create content in the form of documents, emails and other content. They also pull files and store them locally to work on, whether that’s on their shared drive or in a cloud drive. They may even create copies and store those on their shared drives.

It’s a typical scenario for many companies. Your first step is to find where all this information is stored and connect it so that you get a 360-degree view. It’s safe, at this point, to say you will need some kind of file analytics solution that can connect to all types of applications and repositories to give you that single view. The key is to leverage a file analytics solution that can connect both structured (application) data and unstructured information (documents, emails, etc.).
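
A minimal sketch of that consolidation step is below, under the simplifying assumption that the repositories are reachable as file-system paths. Real connectors also reach into applications, email systems and cloud drives, but the output is the same idea: one inventory covering every silo.

```python
import os

def inventory(roots):
    """Walk several storage locations and build one consolidated file inventory.

    `roots` is a list of directory paths -- stand-ins for the shared drives
    and cloud-drive sync folders a real file analytics connector would cover.
    """
    files = []
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                stat = os.stat(path)
                files.append({"path": path,
                              "size": stat.st_size,
                              "modified": stat.st_mtime})
    return files
```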

Analyze what information you have

Now that you have that 360-degree view, you need to know what information you have and where it’s located. Your file analytics solution plays an important role at this point. It will analyze your information, extract its associated metadata and automatically classify it.

There are different levels of classification depending on how much your file analytics solution is capable of doing. Surface-level classification classifies information according to metadata such as date created, created by, last time accessed, format, language, named attributes and other high-level classifications.

Deeper scans look into the content itself and enable you to recognize personally identifiable information such as names, account numbers, addresses, credit card numbers, and more. This level of analysis is supported through defined taxonomies and ontologies, dictionaries, pattern extractions or a semantic repository. Machine learning can also help automate the classification process, learning and improving classification as more information is analyzed.
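
As a rough illustration of that content-level scanning, the sketch below flags two kinds of PII with patterns. These two regexes are illustrative assumptions only; production tools combine taxonomies, dictionaries, pattern libraries and machine learning to reach acceptable accuracy.

```python
import re

# Illustrative PII patterns only -- real detection needs far more
# than two regular expressions.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def find_pii(text):
    """Return the sorted kinds of personally identifiable information found."""
    return sorted(kind for kind, pat in PII_PATTERNS.items() if pat.search(text))
```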

Identify ROT (redundant, obsolete and trivial)

Now it’s time to clean house. Identify what information is redundant (you don’t want to keep copies), what is trivial (it’s not critical to your work today), and what is obsolete (you no longer need it).

Bassam Zarkout defined ROT well:

Organizations may have different definitions of what is and what is not ROT, but in a nutshell, it is as follows:

  • Any content found to be responsive to litigation and ediscovery situations (ESI) is not ROT (by definition).
  • Of what is left, ROT is content that is not needed for business, not needed for compliance reasons, not accessed for a long time, is an exact or near duplicate, etc.

ROT is information you don’t need, and in many cases, you can simply delete. But not all ROT is the same, so you need to think about this information and what you want to do with it. Again, Bassam provides some guidelines to help in this post.

Do this step before you start applying new policies and procedures so that you aren’t wasting time on information you don’t need, and you can ensure its defensible destruction.

Keep in mind that with GDPR, you should only be storing customer information you need to provide services and support to the customer. So if you have a lot of information about the customer that provides no value to how you support them, destroy it.
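
The redundant and obsolete checks above can be sketched as follows. This is a crude illustration; real remediation adds near-duplicate detection, business-rule checks and defensible-destruction workflows before anything is actually deleted, and the five-year cutoff is an arbitrary assumption.

```python
import hashlib
import os
import time

def find_rot(paths, max_age_days=365 * 5):
    """Flag exact duplicates (by content hash) and long-untouched files.

    Returns (duplicates, stale): redundant copies and files not accessed
    within `max_age_days` (an assumed policy threshold).
    """
    seen, duplicates, stale = {}, [], []
    cutoff = time.time() - max_age_days * 86400
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in seen:
            duplicates.append(path)       # redundant: identical content seen before
        else:
            seen[digest] = path
        if os.stat(path).st_atime < cutoff:
            stale.append(path)            # obsolete: not accessed for a long time
    return duplicates, stale
```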

Taking care of the rest

You’ve cleaned out your information. Now you can start thinking about how to deal with the rest of it. If it’s not information you need for running the business today, but you are required to keep it for compliance and other legal reasons, consider archiving it.

Archiving lets you manage the information properly, yet reduce storage costs by placing that information in less costly storage locations. Proper archiving also allows you to retrieve that information quickly if it’s needed for some business opportunity or legal dispute.

At this point, you are ready to apply your policies and procedures to your business information. These policies may relate to GDPR, or they may relate to compliance and other legal regulations. The key is to make sure the information you need to manage is properly managed on an ongoing basis.

Focus your efforts for success

While the assessment phase is always your first step to successful information governance, it’s not a one-time effort.

Regularly assessing your information across the enterprise is critical to ensure you are managing all your information properly.

It helps you:

  • Identify ROT and remove it regularly.
  • Find situations of non-compliance, so you can deal with them before things get out of hand.
  • Mitigate risks related to exposure of confidential information, including PII and PCI.
  • Better manage storage by moving information to storage tiers based on its importance.

Learn how Everteam File Analytics can help you get a handle on all your information, regardless of where it is stored. Download the guide today.

Making the Case for Information Governance: 4 Key Use Cases

The amount of content you store – structured and unstructured – shows no signs of slowing down. You are inundated with information, some of which you need to keep, some you don’t. Enterprise content management helps you manage that information, but you also need an underlying information governance model and supporting technology to help you figure out not only what information you have and where, but what to do with it.

Information Governance is not a one-time project

The biggest mistake that many make is assuming information governance is a one-time project. Set up some policies and procedures and your plan is in place. It’s not that simple. But it also doesn’t have to be complex. The best way to think of information governance is as an umbrella term that supports a number of use cases that require some type of activity around your information.

Or think of it this way:

Information governance is a set of problems and a set of solutions you take on to solve those problems.

Four common use cases for information governance

When you break information governance down into a series of projects or “mini-strategies”, it’s much easier to ensure your information is well managed going forward. Here are four of the most common use cases (or projects):

  • Records Management: A system for the collection, indexing and analysis of records produced anywhere – and by any system – in your organization.
  • File Analytics: Cross-repository inventory and analysis of content to uncover compliance deviation and execute policies to drive bulk actions to delete ROT and quarantine PII. This is the most common use case we discuss with customers today.
  • Application Archiving: Offload inactive content from production applications to reduce costs, increase compliance and rationalize infrastructure.
  • Application Decommissioning: Capturing and archiving content from systems to ensure ability to retrieve and report after decommissioning of the source system.

Making the case for information governance

It’s very hard to get information governance approved. Getting to the top of the list for any of the projects listed above is a real struggle. To help you prove how important these types of projects are, here are some points you can focus attention on:

  1. Meeting regulatory compliance requirements – new compliance requirements happen regularly and it’s costly to fall out of compliance.
  2. Reduce legal exposure – you put yourself at great legal risk when you store information in shared folders or cloud-based applications like Office 365 or Dropbox that you shouldn’t have there. Things like credit card numbers and other personally identifiable information.
  3. Reduce data theft exposure – same issue for data theft. Never assume you won’t get hacked. Assume you will, it’s just a matter of when. If you are a company that stores everything forever, the surface area you make available for data theft is enormous.
  4. Reduce storage and license costs – Traditional ROI analysis also makes a great case for an information governance project. Reducing storage costs is one element; reducing license costs, operating costs and other unnecessary expenses is also important to point out.
  5. Eliminate costs associated with obsolete systems – you’re ready to move to a new application, or you no longer need one, and you don’t want to keep these systems around just to store the content they currently manage. Archive the information you need to keep, then decommission these systems to reduce costs.
  6. Reduce architectural complexity – this is particularly important for companies that regularly acquire other companies and end up with multiple systems to manage. An IG strategy gives these companies an organized way to figure out what systems and information they need, how best to archive information they must keep, and how to decommission apps no longer required.

Success factors for information governance initiatives

You won’t be successful if you try to do everything at once. The complexity will kill any small successes you have. What works best is to take on an information governance solution that has a clear beginning and end, a clear scope and clear success factors. For example, don’t say you are going to set up records management for the entire enterprise; say you are going to set up records management for a specific department and a specific type of content. Another example is to pick an obsolete application that costs a lot of money to maintain and set up a project to archive its content (or destroy it) and shut down the application.

In other words, your IG strategy will find success if you think in terms of:

  • A tactical initiative in service of a long-term strategy
  • An effective solution design that fits an enterprise strategy
  • A specific near-term objective that defines the scope of the initiative
  • Defined justification: Compliance, Cost or Business enablement
  • Scope defined by content types or use cases
  • Effective executive sponsorship

Ken Lownie, Everteam VP of Operations, recently spoke about information governance and how it fits into the overall structure of enterprise content management in the KMWorld webinar: The Future of Enterprise Content Management. If you’d like to listen to his full discussion, you can watch the replay on demand here.

GDPR and insurance companies: what will change?

The General Data Protection Regulation (GDPR) will come into force on May 25th. In the meantime, companies in the insurance sector, like all others, must comply with its new requirements, which ensure that organizations properly manage the confidentiality of the information they have collected from or about European citizens. But what will change? What advice does the CNIL, a reference organization in France regarding the application of the GDPR, give to insurance organizations that even US companies can apply?

A “compliance pack” called to evolve

Ahead of May 25, 2018, the enforcement date for GDPR, the CNIL plans to update its compliance packages (and propose new ones). The insurance sector is first in line. It must be said that insurance companies collect a considerable amount of data every year, which allows them to create personalized offers, adjust tariffs, and follow the evolution of the market and consumer needs.

The insurance compliance package proposed by the CNIL will therefore soon be enriched with a GDPR component, in addition to the reminder of the standards to which these companies are subject. Still, by studying the texts of the new General Data Protection Regulation, it is possible to outline its contours even further.

Remember: the rights of your customers

Let’s start with a quick reminder: what are the rights granted to your customers by the GDPR? The most important are undoubtedly the following ones. These are the ones that will require a whole new approach to information governance in the insurance industry:

  • The right of access to the data
  • The right to be informed about the processing of the data used
  • The right of rectification
  • The right of opposition
  • The right to portability of data, in some cases (we’ll talk about this again)
  • The right to be forgotten

All of these rights, such as the right of access to data for example, are not fundamentally new; most are already registered in the Data Protection Act of 1978. Those that already existed are nevertheless strengthened, reaffirmed and harmonized at European level.

Thus, in the insurance sector, it is essential to master (and be able to communicate) the following information: the personal data recorded, their provenance, the names and roles of the persons authorized to use them, the purpose and use of the data, their location, and who has access to them. Article 20 of the GDPR grants any holder, past or current, of an insurance contract the right to receive a copy of their personal data, in a common, easily readable format.
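
That obligation to hand a customer a copy of their data in a common, machine-readable format can be sketched as a simple export routine. The record layout and field names below are assumptions for the example, not a real insurer’s schema; in practice the data would be assembled from CRM, billing and claims systems.

```python
import json

def export_customer_data(records, customer_id):
    """Assemble one customer's personal data as machine-readable JSON.

    `records` is a list of dicts, each carrying a "customer_id" plus
    arbitrary fields -- a simplified stand-in for multi-system data.
    """
    subset = [r for r in records if r.get("customer_id") == customer_id]
    return json.dumps({"customer_id": customer_id, "records": subset},
                      indent=2, sort_keys=True)
```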

Insurance: how to be in compliance with the GDPR?

As an insurance company, you cannot take the risk of non-compliance with the requirements of the GDPR. To comply is to avoid a commercial risk (a sanction could have unfortunate consequences for your image and reputation) as well as a significant financial one: fines can reach €20,000,000 (over US $23 million) or 4% of annual global turnover, whichever is higher.

Therefore, the first step to comply with the GDPR is to appoint a DPO (Data Protection Officer). The DPO’s mission will be to ensure that the law is respected and that processes are put in place to enhance the transparency of your company. In particular, the DPO will have to make sure that, as of next May, you are able to:

  • To group all exchanges with customers, whatever the points of contact they use (postal mail, telephone, email, in-agency visits…), within the same document
  • To demonstrate that your customers have consented to the use of their personal data
  • To clarify, in the case of institutional control and at the request of customers, the use made of personal data
  • To set up information governance, based on documentary traceability, storage security and responsiveness

What the CNIL recommends

The work required to get GDPR compliant must be implemented gradually. Thus, the CNIL recommends that insurance companies, like others, carry out four main operations.

  1. First, an organizational component, with the designation of the DPO and its hierarchical position, and the setting up of steering committees.
  2. Then, a “risks and internal controls” workstream, allowing you to take stock of current practices and the elements to be corrected.
  3. It should be followed by the deployment of information governance tools (access, traceability, security, communication…).
  4. Finally, an awareness step, internally and externally, on the new governance of information, will have to complete the implementation of the GDPR in the insurance sector.

Compliance with GDPR is not optional for companies in the insurance industry. If you’re looking for help figuring out what you need to do, give us a call.


7 Reasons Legacy ECM should be replaced – Your Data Migration Strategy Simplified

Shifting to a Modern Enterprise Content Management System

Current Situation…

Setting a data migration strategy is vital as there are many challenges with legacy systems. With Everteam, your data migration strategy is simplified.

One of the most difficult challenges CIOs face is maintaining and upgrading legacy systems (FileNet P8 and Content Manager 5.2.1, LaserFiche 8 and earlier versions, Documentum 6.5, etc.). While technology continues to evolve, the business value of legacy systems weakens; sticking with them brings countless disadvantages that can do tangible damage to your company. Here is how. Legacy IT systems are no longer prepared for change, as software vendors have discontinued support for them. This also means that your company will be paying extremely high maintenance costs. As those costs escalate, security threats also increase: legacy systems make security worse, not better, because of their age, especially since installing upgrades and patches is no longer possible. This in turn affects performance, and meeting customer commitments becomes impossible. New generations use new technology, which can keep up with volume and performance, unlike legacy systems with their restricted update capabilities. New-generation employees are also more familiar with the latest technologies; imagine the difficulty of finding someone with the knowledge and technical skills for legacy systems.

Urgent Need for a Data Migration Strategy

Real Issues with Legacy Systems that WILL Damage Your Organization…

Let us see why it makes sense to migrate off old legacy systems before they continue to hold your company back, and how your data migration strategy can be simplified.

1) Discontinued support from software vendors

As many in the IT industry focus on improving the standard of operating systems used in organizations, vendors refuse to further support legacy systems and instead push their clients to upgrade. The older an application gets, the more difficult it becomes to acquire support. For example, IBM support for older software and hardware at version 05.02.01 will be discontinued in September 2018, per IBM’s official support policy.

 2) Performance and Security-related threats

Legacy systems are unable to keep up with volume, performance and high-availability demands. This means an increase in manual labor and, consequently, a higher potential for bottlenecks and other inefficiencies. More time is spent trying to understand high volumes of data instead of focusing on the real tasks at hand that could actually improve employee performance and efficiency. Without the ability to install upgrades and patches, there is also a higher risk of data loss and security-related threats.

3) Higher Costs

Probably the most obvious disadvantage of legacy systems. Not only is the cost of maintaining old systems high; other costs make legacy systems expensive too. These include hiring specialists familiar with legacy systems, and support-engineer pay rates, as most engineers are not familiar with legacy systems. A specific IT environment and hardware are needed to run the solution, as legacy systems cannot be installed or run on just any existing environment. Finally, since legacy systems are based on old technology, they won’t be able to support their company’s constantly evolving needs, resulting in increased costs for expanding the solution, installing upgrades, developing new features, deploying new solutions or integrating with other systems.

4) Difficulty in Finding Experienced Labor

There is no doubt that legacy systems are based on antiquated technologies, so there is now a lack of the technical knowledge needed to operate and maintain them. New-generation employees are more familiar with the latest technologies. It is very challenging for companies to find experts who work with legacy systems and know how to operate them. By contrast, you can easily find IT support personnel already familiar with the latest new-generation technologies and databases, saving the time, effort and costs of maintaining old-fashioned technical skills.

5) Client Differentiated Versions

In this section, we are going to address our experience in the Middle East specifically. There are country-specific standards for data and information exchange, especially when dealing with government entities, set by each country in the Gulf (for example YESSER in Saudi Arabia, MOTC standards in Qatar, etc.), and these requirements cannot be disregarded when putting in place a large-scale enterprise content management implementation. One of the many challenges faced is the involvement of software integrators (i.e., implementers) in most implementations happening in our region, because software vendors are not physically present there. Therefore, and also due to customers’ ever-changing functional and technical requirements, the customer ends up with a custom-developed solution where a different version is installed at each client. Upgrades in those cases become difficult, if not impossible. This eventually leads to discontinued versions in addition to discontinued application support.

6) User Adoption

User adoption is almost always a prerequisite for the success of a project. Legacy systems present many adoption challenges, particularly for newer end users, who are not familiar with them. And if the company chooses to train those employees, it is very demotivating for a new generation to learn an old system while everyone else is moving forward.

7) Lacking Customer & User Experience

Old-fashioned legacy systems do not support the interfaces customers use today: tablets, smartphones, and laptops, not to mention web-based user interfaces. Performance suffers as a result, creating a negative user experience with degraded document viewing and annotation, which in turn affects overall business performance and innovation.

8) Migration with Everteam…

Everteam has built a strong reputation over its 25 years of experience in the field, with user experience at the center of every implementation. Migrating to an Everteam solution provides competitive advantages that will make you wish you had migrated long ago. All our clients run the same solution version, customized to fit their individual business needs, and our solutions, proven worldwide, stay in sync with the latest technologies in the field of ECM. With an average implementation time frame of two months, your organization gets a modern browser-based interface with improved performance and user experience. Everteam has offices across the MENA region, offering customers direct support and sparing them the hassle of dealing with resellers.

But wait, there is a simpler way! With Everteam, you don’t have to migrate, you can simply archive your legacy!

With Everteam, setting a data migration strategy is straightforward, and a simpler approach to data migration is data archiving. Instead of migrating older, unused data, you can archive it so that it is kept and can be referenced from the current system. This suits cases where older data is important to the organization and may be needed for future reference, or where data must be retained for regulatory compliance. Archiving is a win-win scenario: a pre-determined set of business requirements governs the movement of data and/or documents from legacy systems to cheaper, secure, and accessible storage. Think of it as three main stages: 1) capture data using our standard connectors and upcoming ones; 2) manage data by classifying it, applying retention rules, and destroying it based on pre-defined workflows; 3) store records while making them available across all search points within the organization for descriptive and predictive analytics.
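The three-stage flow above can be sketched in code. This is a minimal illustration only: the function names, the `Record` class, and the classification and retention rules are all hypothetical and do not represent any Everteam API.

```python
# Illustrative sketch of the capture -> manage -> store archiving flow.
# All names and rules here are hypothetical examples.

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Record:
    id: str
    content: str
    category: str = "unclassified"
    created: date = field(default_factory=date.today)

def capture(legacy_rows):
    """Stage 1: capture data from a legacy source via a connector."""
    return [Record(id=r["id"], content=r["content"], created=r["created"])
            for r in legacy_rows]

def manage(records, retention_days=365 * 7):
    """Stage 2: classify records and apply a retention rule,
    destroying anything past its retention period."""
    cutoff = date.today() - timedelta(days=retention_days)
    kept = []
    for rec in records:
        # toy classification rule, purely for illustration
        rec.category = "contract" if "contract" in rec.content.lower() else "general"
        if rec.created >= cutoff:  # records older than the cutoff are destroyed
            kept.append(rec)
    return kept

def store(records):
    """Stage 3: store records in an archive indexed by category,
    so they can be found from any search point."""
    archive = {}
    for rec in records:
        archive.setdefault(rec.category, []).append(rec)
    return archive
```

A real implementation would plug connectors, workflow engines, and storage tiers into each stage; the point here is only the shape of the pipeline.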

I Want To Migrate With Everteam!

Blockchain Technology

Redefining the Future of e-Services?

We have seen many changes in the digital landscape of the internet over the past decade. One of the latest developments, and the one most talked about by CEOs and CTOs, startup entrepreneurs, and even governance activists, is blockchain technology. While some of you are slightly familiar with blockchain, a wide majority may have heard about it without really understanding how this new technology will redefine the future of internet transactions.

The internet became a tool for decentralizing our information; it allows us to interact with anyone by sending any piece of information in less than a second. Yet, for all its advancements, the cyber world still has a gloomy and risky side, because any piece of digital information is at risk. Online store owners continue to put themselves and their customers in jeopardy of payment fraud, owing to insufficient internet safety.

The simplest example of an online transaction is an online payment. When you buy an item online, moving the money from your bank involves many intermediaries, and each of them takes a transaction fee, which makes the payment costly. In the physical world, by contrast, you do not need an intermediary to check the money and transfer it. Now, you didn't think I would share all this without some good news toward the end, right? Today, through blockchain, we can send money the way we send an email. Blockchain was invented to enable an alternative to currency known as "Bitcoin," and it can also be used for voting systems, online signatures, and many other applications. But to understand Bitcoin, we first need to understand what a blockchain is.

So, what is Blockchain Technology?

Blockchain stores information across a network of personal computers, so the information is distributed and no central company owns the system; this protects the integrity of each piece of digital information. This different approach to storing information suits environments with high security requirements and value-exchange transactions, because no single person can alter any record. It not only lets us create safe money online but also lets us protect any piece of digital information: contracts, online identity cards, and so on. So what is Bitcoin again? Bitcoin is a form of digital cash (a cryptocurrency) that can be sent to anyone across the internet, with no intermediary involved. Transactions are verified by a network of people all over the globe who validate other people's Bitcoin transactions, and the blockchain tracks records of this digital cash to confirm that only one person owns each unit.

Just the Beginning of Blockchain Technology

Blockchain technology brings many advantages, and this is only the beginning. Blockchain has proved able to cut costs, strengthen cybersecurity, empower users, reduce the clutter of multiple ledgers, and, most importantly, prevent transaction tampering. This is just the tip of the iceberg, as they say: blockchain technology is predicted to develop so far that soon we will be able to protect our online identities and track devices on the Internet of Things. Blockchain has in fact extended its reach beyond alternative payment systems to revolutionize the entire IT world. For example, a refrigerator connected to the internet and equipped with sensors could eventually use blockchain to manage automated interactions with the external world, from ordering and paying for food to arranging its own software upgrades and tracking its warranty.

Blockchain Technology and Data Management platforms…

Blockchain technology has shown that it is convenient not only for financial transactions but also for other sectors that deal with information and data, such as the public sector. Blockchain can simplify the management of information: thanks to its decentralized nature, information is managed in a secure infrastructure, giving blockchain leverage over other digital technologies.

These sectors value privacy, so managing confidential information is critical to information security. Public and government organizations have doubts about storing their data in the cloud, as they have specific needs such as keeping data within their borders for security and political reasons. The data used and shared within those organizations includes highly confidential records such as birth and death certificates, marital status certifications, business licenses, criminal records, and even property transfers. As a consequence, even though public-sector organizations manage their data electronically, some records still remain in hard copy, forcing people to be present on site to complete record-related transactions.

The strength of blockchain comes from the way it is built: a series of blocks that record data as hashes with timestamps, so that the data cannot be tampered with. This gives government organizations a guarantee of secure data storage that cannot be manipulated or hacked, leading to improved management of information in the public sector and paving the way for fully smart and secure cities and environments.
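The "series of blocks with hashes and timestamps" idea can be shown in a few lines of code. This is an educational sketch, not a real blockchain: there is no network, no consensus, and the record contents are made up.

```python
# Minimal illustration of how a chain of hashed, timestamped blocks
# makes tampering detectable. Educational sketch only.

import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash covers its timestamp, data,
    and the hash of the previous block."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(
        {k: block[k] for k in ("timestamp", "data", "prev_hash")},
        sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def is_valid(chain):
    """Recompute every hash and check each block links to its predecessor."""
    for i, block in enumerate(chain):
        payload = json.dumps(
            {k: block[k] for k in ("timestamp", "data", "prev_hash")},
            sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False  # block contents were altered after hashing
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # chain linkage is broken
    return True

chain = [make_block("birth certificate #1", "0" * 64)]
chain.append(make_block("property transfer #2", chain[-1]["hash"]))

print(is_valid(chain))        # True
chain[0]["data"] = "altered"  # tamper with a stored record
print(is_valid(chain))        # False: the tampering is immediately detectable
```

Because each block's hash covers the previous block's hash, changing any record invalidates every block after it, which is exactly why the structure resists manipulation.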

There is no doubt that blockchain technology has gained a wide share of the marketplace, and everyone is asking whether ECM providers (such as Everteam) will incorporate blockchain into their solutions. You can rest assured that we will be integrating blockchain technology into our solutions; in fact, our blockchain connectors are expected to be released in late 2018, so stay tuned! Subscribe to our newsletter HERE.



Who are the Data Controllers and Data Processors in GDPR?

In my last blog, I talked about the definition of Personal Data and the various data protection actions that Data Controllers and Data Processors may apply to this Personal Data (Anonymize, Pseudonymize, and Minimize).

But who are these Data Controllers and Data Processors?

These are the parties that capture, process and store Personal Data belonging to Data Subjects. Under the GDPR Regulation, these parties have obligations to protect the Personal Data of these Data Subjects.

Data Controllers/Data Processors

Data Controllers

This is “the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law”.

In plain English, this is the party (individual, entity or authority) to which the Data Subject provides his or her Personal Data in order to receive goods and services.

The GDPR Regulation imposes a range of data protection obligations on the Data Controller, including:

  • Restrict the scope of data that can be collected and the duration of retention of this data
  • Seek and obtain the consent of the Data Subject BEFORE the Personal Data is captured
  • Once received, protect this data
  • Notify the supervisory authority (and, in some cases, the Data Subjects) if/when a data breach occurs
  • Appoint a Data Protection Officer or DPO (under certain conditions) – covered in a future blog

Data Processors

Similarly, the Data Processor is “the natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller.”

This is the party that performs part or all of the processing on behalf of the Data Controller. One of the game changers with GDPR is that Data Processors also have obligations under the regulation, and these obligations apply even to Data Processors located outside EU jurisdictions, for example a US-based cloud provider performing data processing on behalf of a Data Controller located within the EU:

  • Must implement specific organization and technical data security measures
  • Keep detailed records of their processing activities
  • Appoint a Data Protection Officer or DPO (under certain conditions)
  • Notify data controllers if/when a data breach occurs

In view of these GDPR obligations, Data Controllers must apply greater due diligence to the processes by which they select new Data Processors and re-qualify existing ones.

Data Controllers must also determine whether they fall under the GDPR Regulation and identify the responsibilities and measures they must implement vis-à-vis the Personal Data they process.

Lots more to talk about here, but suffice it to say that organizations that fit the definitions of Data Controllers and Data Processors should assess their GDPR-related Data Protection obligations and implement measures and technology-based solutions to enable and enact their compliance.

I will cover further aspects of the GDPR Regulation in upcoming blogs, namely the rights of Data Subjects.

Bassam Zarkout

Personal Data in GDPR and How You Can Deal With It

In my last blog, I gave a general introduction to the EU General Data Protection Regulation (GDPR), the upcoming data privacy regulation due to come into effect on May 25th, 2018. GDPR grants broad rights to Data Subjects over the way their “Personal Data” is handled, and it places obligations on “Data Controllers” and “Data Processors” to protect the Personal Data of “Data Subjects.”

In this blog, I will focus on the topic of “Personal Data.”

GDPR Chapter 1 Article 4 defines “Personal Data” as

“any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”.

  • Data: stored information
  • Personal: the information relates to an identified or identifiable “natural” person – meaning the identification of the person (an individual) is possible using the data

The GDPR definition of Personal Data is wider in scope than commonly used terms like PII (Personally Identifiable Information), PHI (Personal Health Information), and PCI (Payment Card Industry). In fact, Personal Data can relate to any mix of the following:

  • Personal: name, gender, national ID, social security number, location, date of birth
  • Physical, genetic, psychological, mental, cultural, social characteristics, race, ethnic, religious, political opinions, biometric, etc.
  • Online computer identifiers
  • Medical, financial, etc.
  • Organizational: recruitment, salary, performance, benefits, etc.
  • Other

It is worth noting that GDPR does not apply to deceased persons. However, their data “may” be deemed personal for their descendants if it reveals hereditary information. Also, the “identifiability” of a Data Subject is a moving target, because it depends on his or her circumstances.

There are three important terms to learn about regarding Personal Data in GDPR:


Anonymize Personal Data

Data Controllers and Processors can protect Personal Data by anonymizing it. This is the permanent modification of Personal Data in such a manner (randomize or generalize) that it cannot be attributed back to the Data Subject. It is also an irreversible process, meaning that the data cannot be restored back to its original identifiable form. Anonymized data is not subject to GDPR restrictions.
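As a concrete sketch of "randomize or generalize," the snippet below drops direct identifiers and coarsens the remaining fields. The field names and generalization rules are illustrative assumptions, not prescribed by GDPR, and real anonymization requires a careful re-identification risk assessment.

```python
# Hedged sketch of anonymization by generalization.
# Field names and rules are hypothetical examples.

def anonymize(record):
    """Return an anonymized copy: direct identifiers are dropped and
    quasi-identifiers are generalized. No mapping back to the original
    is kept, so the process is irreversible."""
    band = (record["age"] // 10) * 10
    return {
        # name and national_id are simply discarded
        "age_band": f"{band}-{band + 9}",         # exact age -> decade band
        "region": record["city_region"],          # city generalized to region
        "salary": round(record["salary"], -3),    # salary rounded to nearest 1000
    }

record = {"name": "A. Person", "national_id": "X123", "age": 37,
          "city_region": "Île-de-France", "salary": 48900}
print(anonymize(record))
```

Because nothing in the output can be attributed back to the individual, data transformed this way falls outside GDPR's restrictions, which is precisely the appeal of anonymization.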

Pseudonymize Personal Data

Data Controllers and Processors can pseudonymize personal data by processing it in such a manner that “it can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”.

This approach:

  • Carries a higher risk than anonymization and requires technical and procedural controls.
  • Strikes a better balance between the interests of Data Subjects and those of Data Controllers/Processors.
  • Leaves the data subject to GDPR controls, since Personal Data can be re-identified from it.
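The "additional information kept separately" idea can be sketched as follows. The names here are illustrative assumptions; in practice the mapping store must be protected by the technical and organisational measures the regulation describes.

```python
# Sketch of pseudonymization: direct identifiers are replaced with random
# tokens, and the token-to-identity mapping is kept in a separate store.
# All names are hypothetical examples.

import secrets

lookup = {}  # the "additional information", kept separately and protected

def pseudonymize(record):
    """Replace the name with a random token and file the mapping
    in the separate lookup store."""
    token = secrets.token_hex(8)
    lookup[token] = record["name"]
    data = dict(record)
    del data["name"]
    data["subject_id"] = token
    return data

def reidentify(data):
    """Possible only with access to the separate mapping, which is
    why pseudonymized data remains subject to GDPR."""
    return lookup[data["subject_id"]]

rec = pseudonymize({"name": "A. Person", "purchase": "book"})
print("name" in rec)    # False: the data set alone no longer identifies anyone
print(reidentify(rec))  # A. Person (recoverable only via the separate store)
```

Unlike anonymization, this transformation is reversible for anyone holding the mapping, so the data stays within GDPR's scope.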

Minimize Personal Data

The GDPR states that Personal Data should be “adequate, relevant and limited to what is necessary for the purposes for which they are processed. This requires, in particular, ensuring that the period for which the personal data are stored is limited to a strict minimum. Personal data should be processed only if the purpose of the processing could not reasonably be fulfilled by other means.”

The word “necessary” is critical here. It means that Data Controllers and Processors can collect only the data that is necessary for the purpose of the transaction with the Data Subject, and they may retain this data only for a strict minimum period.

In my coming blogs, I will cover the various rights of Data Subject vis-à-vis Personal Data, for example:

  • Right to consent
  • Right to be forgotten
  • Right to rectification
  • Right to data portability
  • Right to object
  • Right to limited usage of collected data
  • Right to be notified of data breaches

Bassam Zarkout

Subscribe to the InfoGov Insights newsletter to stay up to date on things related to Information Governance.

An Interview: Modernization and the Legacy Systems Headache

Our own Dan Griffiths was approached this past summer to provide his insights into the legacy system headaches that organizations face today. You can read the full piece on the website in their August issue, or you can read on to hear Dan’s views on modernization.

Although they may well have been considered state-of-the-art in their day, the IT systems currently used by many financial institutions (FIs) are several decades old: legacy systems that are creaking at the seams. Without doubt, the continued use of such outdated systems (often large and cumbersome IT infrastructures) is making it very difficult for FIs to adapt to the new demands of customers and regulators. For many of those operating in and around the financial sector, FIs are very much in survival mode, with little chance of new developments, of replacing legacy systems, or of much-needed innovation and modernisation.

1. With many financial institutions (FIs) continuing to operate legacy IT systems which are decades old, how pressing and problematic is the need to maintain or replace them?

Dan Griffiths: For many financial institutions, it’s very important to maintain, and in some cases replace, legacy IT systems if they are going to deliver modern customer experiences while still adhering to regulations that are increasing and continually changing. However, for business continuity reasons, many legacy systems cannot be replaced. In these cases, a modern agile process framework is very helpful to connect legacy systems to the web portals and mobile applications that are key customer interfaces today.

When systems are replaced, FIs face challenges figuring out what to do with the data they maintain. Knowing what data to keep for compliance and business continuity requires an agile approach to application decommissioning.

2. How do aging legacy systems affect the ability of FIs to compete in an aggressive business environment? Are they compromising efficiency, agility and innovation?

Dan Griffiths: Legacy systems adversely affect the ability of FIs to compete because they are too rigid and cannot change quickly without a massive coding and development overhaul. Without something additional (i.e., a modern agile process framework), they can’t change or innovate quickly, deeply affecting their ability to keep up with changing markets and growing customer expectations.

3. To what extent are these systems the result of patching together systems that were never intended to integrate?

Dan Griffiths: Many of the systems in financial services reside in silos, often in different business divisions. This siloed environment makes it increasingly difficult to integrate systems without a modern agile process framework. Many of these systems were never intended to integrate, but that integration is now critical to customer experience success. Without it, FIs don’t have a single view of the customer across products and services, and they can’t provide a consistent, seamless customer experience.

4. Are FIs reluctant to spend money on legacy systems due to a “if it ain’t broke don’t fix it” mentality? What are the cost and time implications of replacing legacy IT?

Dan Griffiths: In today’s climate of shrinking IT budgets, the adage “if it ain’t broke, don’t fix it” may be the prevailing mantra, but it often results in more costs and issues than replacement does. In some cases, FIs put a new system in place yet leave the legacy system running to maintain existing records. Storage costs, IT expertise and time-consuming coding changes all result in higher-than-expected costs for maintaining legacy systems. What surprises many is that retiring legacy systems and migrating data can be done quickly and yield bigger cost savings through an agile application decommissioning strategy.

5. What options are available to FIs to solve the legacy problem? Are there ready alternatives that are easier to use and deploy – for example, enterprise content management (ECM)?

Dan Griffiths: There are a few options for solving the legacy problem. In cases where it isn’t feasible to replace a legacy system, FIs can introduce business process automation frameworks to connect these systems with modern interfaces such as portals and mobile applications. This approach enables FIs to keep data in their legacy systems yet make it accessible to modern customer experiences.

Another option is to migrate legacy systems to modern applications following an agile application decommissioning strategy. The key is to migrate only the data needed for compliance and business continuity and to put in place a solution that will manage archived data appropriately, including its eventual defensible destruction.

6. Do you expect to see an uptick in the number of FIs replacing their legacy IT systems in the years to come? What steps should they take to incorporate this process into their long-term corporate strategy?

Dan Griffiths: Yes, we expect to see more FIs replacing legacy systems for a variety of reasons. The key is to let employee efficiency and customer experience drive the priority of a modernization strategy. A simple replacement strategy where you turn on the new system and turn off the old one is not possible for most organizations. Using a business process automation framework can deliver quicker results by enabling access to some legacy systems through modern interfaces while critical systems are migrated.

FIs also need to ensure they are preserving only the data necessary when they migrate to newer systems. An analyze, classify, migrate and manage approach to application decommissioning will ensure compliance is met and the right data is available in the new environment.