Current ‘technology stacks’ fuel tension between business and IT
In a recent HICX survey conducted with Raconteur, 88% of procurement experts confirmed that ownership of supplier data was shared between Procurement and at least one other function – which in 85% of cases included IT. This arrangement leads to sub-optimal outcomes: in the same survey, 89% of respondents also confirmed that they do not have full oversight of their suppliers.
This article investigates why common configurations, such as using a single vendor for P2P, master data governance and ERP, have become the de facto standard; how issues arise from this configuration; and what the negative impacts are on the enterprise.
Lack of autonomy, lack of trust
Even in the case of this simple, single-vendor technology approach, activities such as supplier management remain difficult. Procurement is often left without the autonomy to optimize the workflows and processes relating to its function. This can cause tension between IT and the business, because the outcomes IT projects deliver differ from what the business expected. The result is data that is not trusted, and that cannot be used to answer wider business questions or inform decisions.
Supplier master data management is a particular source of tension because it naturally spans the entire organization: everyone in an enterprise has some form of interaction with suppliers. Organizations are now seeking a common governance mechanism and solution that can be used across the enterprise, a trend that puts supplier master data even further into the spotlight.
Utilizing existing solutions in the tech stack
Let's take, as an example, an enterprise that has invested in SAP architecture and needs a supplier portal to onboard suppliers. We'll assume there is an SAP ERP in the environment, with SAP Master Data Governance for Suppliers (MDG-S) as part of the Master Data Management (MDM) configuration.
In this case, since SAP MDG-S is a competent data governance tool, internal approvers could log in to MDG-S to carry out data validation. However, this creates a poor user experience that needs back-office support, since the wider population of internal business users, and suppliers, cannot access MDG. As far as suppliers are concerned, the logical step is for IT to put forward SAP's P2P solution, Ariba, as the front end where they add their information, and for internal users to go through approval workflows there, so that these steps take place in a system instead of manually.
With SAP Ariba as the front end, the P2P system becomes the master for supplier data and the single portal through which all suppliers reach the business. This is a logical conclusion, and it aligns well with IT's preferred architectural landscape, utilizing options already in the toolkit. For enterprises that have embarked on this journey, however, this is where the issues arise. When the wider objective is data quality, it is taking the wrong tools to the job.
So what’s going on? What can we learn from enterprises that have this set-up?
It’s all to do with how data enters the enterprise and is handled going forward.
The headache of how data flows
The first point to note is that a P2P system is built for transactional data, not for master data. This is an important distinction because it defines the relationship of the P2P to the ERP: the P2P is subservient to the ERP.
Think of it in the consumer world. If a customer pays with a loyalty card, one profile is created for that customer. If the same customer later pays cash without the loyalty card, the transactional data forces a second profile to be created to complete the transaction, and the customer is counted as two people instead of one. The emphasis is on the efficient completion of the transaction.

The same applies to supplier data. The data is created at different times, in different ways, for different purposes: contractual negotiations or sourcing, for example. The outcome is also the same, and, due to the myriad permutations in supplier relations, decidedly worse. If you have a second ERP instance, even if it's from the same vendor (and it so often isn't), your P2P will have a second vendor master. The result is duplicated supplier data, which leads to inaccurate or incomplete analytics, as well as inefficiencies for business users and the procurement team.

As one example, when sourcing is carried out, profiles are created for suppliers who respond to RFIs. Many of these suppliers may never be used, yet they end up in the system, creating more noise and difficulty: if you extract a list of suppliers, you then need to determine who is actually an active supplier.
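To make the duplication problem concrete, here is a minimal sketch in Python. The records, IDs and matching rule are illustrative assumptions, not any vendor's actual data model; the point is simply that spend aggregated by system-specific vendor IDs counts one real-world supplier several times.

```python
from collections import defaultdict

# Illustrative vendor-master extracts from two ERP instances (hypothetical data).
# The same real-world supplier appears under a different ID in each system.
vendor_master = [
    {"erp": "ERP-1", "vendor_id": "100234", "name": "Acme Industrial GmbH"},
    {"erp": "ERP-2", "vendor_id": "V-8871", "name": "ACME Industrial GmbH."},
]
transactions = [
    {"erp": "ERP-1", "vendor_id": "100234", "spend": 120_000},
    {"erp": "ERP-2", "vendor_id": "V-8871", "spend": 80_000},
]

# Aggregating by system-specific ID: one supplier looks like two.
by_id = defaultdict(int)
for t in transactions:
    by_id[(t["erp"], t["vendor_id"])] += t["spend"]
print(len(by_id), "suppliers by ID:", dict(by_id))  # reports 2 'suppliers'

# A naive normalization hints at the dedup work a cleansing program faces.
def normalize(name: str) -> str:
    return "".join(ch for ch in name.lower() if ch.isalnum())

names = {(v["erp"], v["vendor_id"]): normalize(v["name"]) for v in vendor_master}
by_name = defaultdict(int)
for t in transactions:
    by_name[names[(t["erp"], t["vendor_id"])]] += t["spend"]
print(len(by_name), "supplier by normalized name:", dict(by_name))  # 1 supplier
```

Even this toy matching rule only works because the two names are near-identical; real supplier records diverge on legal entity, address and banking fields, which is why cleansing programs are so expensive.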
So, while the flow of data works, the data itself will not be robust: it is already compromised by the necessary relationships between the systems.
Further, as a transactional system, Ariba simply passes the data on; it cannot accept corrections back the other way. So there is an inevitable disconnect between Ariba P2P and the vendor master in the ERP. But what of SAP MDG? The same applies. If a vendor fills out details in Ariba, these go to SAP MDG, and if governance takes place there, then the workflows must be built there too. As mentioned already, this is not optimal: the data is already circulating, and there is no requirement for it to be correct or complete, provided the transaction completes. If the data is wrong, there is no easy workflow for MDG to reject it and send it back to Ariba for the vendor to correct.
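The shape of this one-way flow can be sketched in a few lines of Python. The function names and checks are hypothetical stand-ins for the systems described above, not real Ariba or MDG APIs; what matters is that a validation failure in the governance layer has no route back to the supplier-facing front end.

```python
# Hypothetical stand-ins for the three systems; not real Ariba/MDG/ERP APIs.
erp_vendor_master: dict[str, dict] = {}

def mdg_validate(record: dict) -> list[str]:
    """Governance checks, sketched: return any problems found."""
    errors = []
    if not record.get("tax_id"):
        errors.append("missing tax_id")
    if not record.get("bank_details"):
        errors.append("missing bank_details")
    return errors

def flow_from_ariba(record: dict) -> None:
    """One-way flow: front end -> governance -> ERP, with no reject path."""
    errors = mdg_validate(record)
    if errors:
        # Nothing here can push the record back to the supplier in the
        # front end; the transaction still needs to complete.
        print(f"{record['vendor_id']}: validation failed {errors}")
    erp_vendor_master[record["vendor_id"]] = record  # bad data lands in the ERP anyway

flow_from_ariba({"vendor_id": "S-100", "name": "Acme GmbH", "tax_id": ""})
print(erp_vendor_master)  # the flawed record is now master data
```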
Sub-optimal outcomes and hot fixes
If data from SAP Ariba, or any P2P, is wrong, you are left with three sub-optimal options:
- Manually fix it, in which case you end up with a manual process anyway
- Simply allow bad data into the ERP(s)
- The most common choice: allow the data through, have the ERP flag the mismatch and send its known data back to Ariba so that transactions can proceed, accepting that there are discrepancies between the Ariba sourcing (contract) data and the P2P master data (a rough sketch of this check follows the list)
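As a rough sketch of that third option, and assuming simplified records rather than any real ERP interface, the compromise looks something like this: detect the mismatch, overwrite the P2P copy with the ERP's known values, and log the discrepancy rather than resolving it.

```python
# Simplified records; field names and values are illustrative assumptions.
erp_record   = {"vendor_id": "100234", "name": "Acme Industrial GmbH",
                "bank_details": "DE89 3704 0044 0532 0130 00"}
ariba_record = {"vendor_id": "100234", "name": "ACME Industrial",
                "bank_details": "DE89 3704 0044 0532 0130 00"}

def flag_and_sync(erp: dict, p2p: dict) -> list[str]:
    """Option 3: flag mismatches, push the ERP's known values back to the
    P2P, and merely record (i.e. accept) the discrepancy."""
    discrepancies = []
    for field, erp_value in erp.items():
        if p2p.get(field) != erp_value:
            discrepancies.append(f"{field}: P2P had {p2p.get(field)!r}, "
                                 f"overwritten with {erp_value!r}")
            p2p[field] = erp_value  # keep transactions flowing
    return discrepancies

for issue in flag_and_sync(erp_record, ariba_record):
    print("accepted discrepancy:", issue)
```

Note that nothing in this loop tells you which value was actually right; the ERP wins by convention, not by verification.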
The net impact of these compromises is that performance and supplier management become impossible, there will be ongoing pressure to initiate data cleansing programs, and, when data is required for emergency reports (as in the case of COVID-19), its reliability will be questioned. Longer term, this erodes any trust in the data, reduces its value and potentially creates tension or mistrust between departments or functions: Procurement and IT, and even Procurement and Finance.
Conclusion
Many organizations are waking up to the fact that this is a compromise too far. What they seek is a front-end portal that ensures all data validation and approvals are done before data enters any system, because once bad data is flowing through systems, experience tells us it is very hard to correct later on.
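Purely as a sketch of that pattern (the checks, approval step and system names below are assumptions, not a description of any particular product), the desired flow inverts the earlier one: nothing is written to any downstream system until validation and approval have both passed.

```python
# A minimal sketch of validate-at-the-front-door; names are illustrative.
downstream_systems: dict[str, dict[str, dict]] = {"ERP": {}, "P2P": {}}

def validate(record: dict) -> list[str]:
    errors = []
    for field in ("vendor_id", "name", "tax_id", "bank_details"):
        if not record.get(field):
            errors.append(f"missing {field}")
    return errors

def onboard(record: dict, approved_by: str | None) -> bool:
    """Only a validated AND approved record reaches any downstream system."""
    errors = validate(record)
    if errors or approved_by is None:
        # The supplier is asked to correct the submission in the portal;
        # no partial record leaks into the ERP or the P2P.
        print("rejected at the portal:", errors or "awaiting approval")
        return False
    for system in downstream_systems.values():
        system[record["vendor_id"]] = record
    return True

onboard({"vendor_id": "S-100", "name": "Acme GmbH", "tax_id": ""}, None)
onboard({"vendor_id": "S-100", "name": "Acme GmbH", "tax_id": "DE123",
         "bank_details": "DE89 3704 0044 0532 0130 00"}, "jane.approver")
```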
The importance of the P2P suite to a modern buying organization inside a large, complex business is undeniable, but experience in recent years has shown that a system designed to manage spend, invoicing and transactions really shouldn't be forced so far out of its comfort zone. P2P suites are simply not set up to manage supplier information on behalf of all other systems and across all user needs.
Instead, Procurement needs to focus on strategic rather than purely tactical considerations:
- Determine and document the use cases for the data (both in Procurement and for the wider organization) and evaluate technologies’ abilities to meet the criteria of the use cases
- Evaluate how supplier master data management can best be supported by a domain-specific solution that also matches the strategic direction of IT
- Work in collaboration with IT and other functions to drive forward digital transformation as it relates specifically to the Procurement division
If you found this blog useful, then you may want to check out our other detailed resources as well, covering different aspects of master data management, data cleansing, data governance and more.