
The Causes Of Bad Data, And How To Fix Them

Bad quality data is costing organizations a lot of money. Gartner research has found that organizations attribute an average of $15 million per year in losses to poor data quality, while 94% of businesses believe the data they hold is inaccurate. Entrepreneur reports that businesses lose 30% or more of their revenue due to bad data. Beyond such sobering figures, bad quality data also comes with hidden costs. Yet a recent HICX report revealed that, despite all these issues, fixing data is still not high on the list of priorities for many organizations because the perceived ROI is very low.

This article will explore:

  1. What are the causes of bad data?
  2. What does bad quality data look like, and how do you recognize it?
  3. How can you fix your organization's bad data?

What are the causes of bad data?

The points outlined below are not an exhaustive list of the causes of bad data, but they are the most common and often cause the biggest issues:

  1. Misjudgment. One of the biggest misconceptions organizations hold about their existing data is that it is clean and accurate. They thereby disregard the potentially damaging impact that bad data is already having within their enterprise applications. The reality is that as more information enters the enterprise, from more sources, the likelihood of mistakes inevitably increases.
  2. Siloed information. Departmental silos form when different parts of an organization become isolated from one another. They cause problems because each department ends up working with its own data definitions, which makes essential data difficult to manage and reconcile across teams and applications. Silos are also notoriously hard to break down.
  3. Lack of data governance. Human error can never be eradicated completely, so it must be accounted for: typos and incorrect or incomplete information are common. Putting clear data governance measures in place is a way of tackling human fallibility; the aim of governance is to create approval processes that catch such errors early in the data life cycle.
  4. No single point of entry for the data. Duplicate data is a logical outcome of departmental silos and human error. Every organization is prone to it, and it can seriously distort reporting and analysis. It also makes data cleansing difficult and expensive, hence the need to tackle the issue right at the point where data enters the organization (a brief illustration follows this list).
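
To see why a single point of entry matters, consider how easily the same supplier can be entered several times in slightly different ways. The sketch below is purely illustrative (the supplier names and the normalization rule are assumptions, not a prescribed method); it shows how a simple normalization key, applied where data enters the organization, collapses duplicate entries into one record:

```python
entries = [
    "Acme Ltd",
    "ACME Limited",
    "Acme  Ltd.",   # extra space and a trailing dot
]

# A crude normalization key: lower-case, strip punctuation,
# collapse whitespace, and unify a common legal-form suffix.
def normalize(name: str) -> str:
    key = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    key = " ".join(key.split())
    return key.replace("limited", "ltd")

unique = {normalize(n) for n in entries}
print(unique)  # {'acme ltd'} -- three entries, one supplier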

What does bad quality data look like, and how do you recognize it?

Before data quality issues can be solved, an organization must first understand what bad data might look like for them. This means communicating clearly, across all seniority levels and silos, the following warning signs to look out for (a small example of automating such checks follows the list):

  1. Incompleteness: Crucial pieces of information are missing, with fields left empty.
  2. Inaccuracy: All the data exists (the fields are filled in), but values may be in the wrong field, spelled incorrectly, or simply wrong. This is arguably the hardest type of bad data to identify, as the issue often lies in subtle errors that are hard to detect without prior knowledge of the specifics, such as names, addresses, phone numbers, payment details, etc.
  3. Inconsistency: The data may technically be correct, but it is not presented in a consistent format or value (e.g., mixing different currencies instead of using one throughout, phone numbers with or without a dialing code, names written as an initial rather than in full).
  4. Invalidity: Data fields are complete, but the value cannot be correct in context (e.g., 'units available' displaying a negative value).
  5. Redundancy: The same data is entered multiple times but expressed in slightly different ways (e.g., entering the same company under different names, or entering a person's name in different ways).
  6. Non-standard data: Data is in non-standard formats or formats the system cannot process (e.g., 'percentage' rather than %).
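
Warning signs like these are hard to spot by eye across thousands of records, but many of them can be flagged automatically. The following is a minimal sketch in Python; the field names, formats, and rules are illustrative assumptions rather than any particular system's schema:

```python
import re

# Hypothetical supplier record; the field names are illustrative only.
record = {
    "name": "Acme Ltd",
    "country": "GB",
    "phone": "020 7946 0958",   # no international dialing code
    "units_available": -3,      # negative stock
    "vat_number": "",           # left empty
}

issues = []

# Incompleteness: crucial pieces of information are missing.
for field in ("name", "country", "vat_number"):
    if not record.get(field):
        issues.append(f"incomplete: '{field}' is empty")

# Invalidity: the field is filled in, but the value cannot be correct
# in context (e.g. a negative number of available units).
if record["units_available"] < 0:
    issues.append("invalid: 'units_available' is negative")

# Inconsistency / non-standard data: the phone number is not in the
# expected '+<country code><number>' format.
if not re.fullmatch(r"\+\d{6,15}", record["phone"].replace(" ", "")):
    issues.append("inconsistent: 'phone' lacks a dialing code")

print(issues)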

How can you fix your organization's bad data?

Fixing the data is not as simple as merely cleaning and changing the information itself. The root of the problem lies in the organization’s mindset, attitude, and overall approach towards the way the data is captured and stored.

  1. Zero tolerance for new bad data. Cleaning the ocean is a useful analogy here. If all efforts go into removing plastic (bad data) from the ocean, the job will never be finished, because new plastic enters the ocean faster than it can be removed. The root of the problem is not being fixed, which is to prevent the pollution from happening at all, or, in other words, to ensure only good data enters the system. To this end, a single point of entry, such as a supplier portal, is beneficial, as the flow of data into the enterprise can be controlled and put through a quality check from the very beginning (see the sketch after this list).
  2. Continuous improvement. Businesses must adopt the notion of data cleansing being an ongoing process, rather than a one-off project. To effectively maintain the quality of data for the long-term, a data governance team must be established to define processes and policies, as well as metrics that allow tracking of changes within the data over time.
  3. Alignment. Assessing each organization's key priorities and ensuring ongoing alignment across all parts of the business are important. Organizations must be able to answer questions such as, 'Which applications are most important, or used the most? Which data elements are of utmost significance (i.e., data we cannot function without)?' and then ensure that data is available to those systems in a consistent and accurate manner.
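
As a rough sketch of what zero tolerance at the point of entry and ongoing quality tracking can look like in practice, the example below applies a few entry rules before a record is accepted and computes a simple rejection-rate metric a governance team could monitor over time. The rules, field names, and records are illustrative assumptions, not a description of any specific platform:

```python
# Hypothetical entry rules applied before a supplier record is accepted.
REQUIRED = ("name", "country", "payment_terms")

def validate(record: dict) -> list:
    """Return a list of violations; an empty list means the record may enter."""
    errors = [f"missing '{field}'" for field in REQUIRED if not record.get(field)]
    country = record.get("country", "")
    if country and len(country) != 2:
        errors.append("country must be an ISO 3166-1 alpha-2 code")
    return errors

# A simple quality metric a governance team could track over time:
# the share of submissions rejected at the point of entry.
def rejection_rate(batch: list) -> float:
    rejected = sum(1 for record in batch if validate(record))
    return rejected / len(batch) if batch else 0.0

batch = [
    {"name": "Acme Ltd", "country": "GB", "payment_terms": "NET30"},
    {"name": "Globex", "country": "United Kingdom", "payment_terms": "NET30"},
]
print(f"rejection rate: {rejection_rate(batch):.0%}")  # 50% -- one record rejected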

To learn more about the importance of governance and related frameworks, please see the following resources.

A case for Supplier Experience Management (SXM)

A recent HICX survey on supplier experience revealed that suppliers are willing to go the extra mile for enterprises that bring them value and a good overall experience. Currently, suppliers face a multitude of supplier-facing portals; if the number of portals were reduced, they would be encouraged to engage more frequently, which would result in better quality data.

Supplier experience refers to all the interactions that take place between an organization and its suppliers. Supplier Experience Management (SXM) is the practice of creating the conditions in which a buying organization and all of its suppliers can achieve mutual success together.

Read our Supplier Experience Survey Report or book your demo of our Supplier Experience Management platform to start achieving mutually beneficial success.

Article updated July 2021
