Shell helps to meet the world’s growing demand for energy in economically, environmentally and socially responsible ways. Shell currently has a project underway to cleanse the material masters that reside in their different enterprise resource planning (ERP) systems, and is using mydatafactory to support that effort.
The benefits of having high-quality, consistent and accurate material master data in their systems are numerous: reliable catalogue searches; smooth and flawless procurement; reduced production downtime; reduced stock levels; leveraged purchasing power; and improved technical integrity of their installations.
Shell selected mydatafactory for several reasons. Firstly, mydatafactory offers an innovative approach, as it is built on artificial intelligence and big data technologies: a self-learning system that builds a knowledge base from user feedback. Shell required a solution that would improve both efficiency and data quality, which is exactly what mydatafactory offered.
Secondly, the solution is available as software as a service, resulting in competitive pricing and a low-threshold project start; Shell required a solution that enabled a quick start of data cleansing.
The third reason was the mydatafactory team’s extensive knowledge of product data cleansing, as Shell required a vendor able to understand their data cleansing processes and map them to the selected solution.
The project started by mapping the existing cleansing processes to the mydatafactory application. For flexibility, work packages are defined and assigned to ‘data stewards’. Each work package focuses on a specific product domain, aligning the process of import, classification, term extraction and normalisation, quality assurance, and export. The system proposes to the data steward which terms and values to extract and to which characteristics those values should be parsed.
Mydatafactory also proposes how to standardise those values. Missing or wrong suggestions can be corrected by the data steward; this knowledge is captured and re-used for new suggestions. The process is finalised by a ‘domain expert’, who makes any last modifications and approves the dictionaries that were built on the fly by the data steward.
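The extract-propose-correct loop described above can be sketched in a few lines of Python. This is a simplified illustration of the general technique, not mydatafactory’s actual implementation; all names (the knowledge base, `propose`, `accept_feedback`, the sample tokens) are hypothetical.

```python
import re

# Knowledge base built from data-steward feedback: maps a raw token
# to a (characteristic, standardised value) pair. Entries are illustrative.
knowledge_base = {
    "ss316": ("material", "Stainless Steel 316"),
    "dn50": ("nominal_diameter", "DN 50"),
}

def propose(description: str) -> dict:
    """Propose characteristic values extracted from a raw material description."""
    proposals = {}
    for token in re.findall(r"[a-z0-9]+", description.lower()):
        if token in knowledge_base:
            characteristic, value = knowledge_base[token]
            proposals[characteristic] = value
    return proposals

def accept_feedback(token: str, characteristic: str, value: str) -> None:
    """Data steward corrects or adds a suggestion; it is re-used from then on."""
    knowledge_base[token.lower()] = (characteristic, value)

# A raw description as it might appear in an ERP material master.
proposals = propose("BALL VALVE SS316 DN50")
# The steward teaches the system a term it missed; future proposals include it.
accept_feedback("ball", "item_type", "Ball Valve")
```

The key point the sketch shows is that every correction enlarges the knowledge base, so later descriptions in the same work package need less manual work.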
During the process a governance framework is built, which holds classifications, characteristics and their values. These data elements can be managed easily by the domain expert. The knowledge captured during the project can also be made available as part of a gatekeeper function, preventing polluted new data from entering the ERP systems.
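A gatekeeper of this kind can be sketched as a validation pass against the governance framework: a new record is only admitted if its classification is known and its characteristic values are among the approved ones. The structure and names below are illustrative assumptions, not Shell’s or mydatafactory’s actual setup.

```python
# Governance framework built during the project: per classification,
# the approved characteristics and their allowed standardised values.
governance = {
    "valve": {
        "material": {"Stainless Steel 316", "Carbon Steel"},
        "nominal_diameter": {"DN 25", "DN 50", "DN 100"},
    },
}

def gatekeeper(record: dict) -> list:
    """Return a list of issues; an empty list means the record may enter the ERP."""
    rules = governance.get(record.get("classification"))
    if rules is None:
        return [f"unknown classification: {record.get('classification')}"]
    issues = []
    for characteristic, allowed in rules.items():
        value = record.get(characteristic)
        if value is None:
            issues.append(f"missing characteristic: {characteristic}")
        elif value not in allowed:
            issues.append(f"non-standard value for {characteristic}: {value}")
    return issues

clean = {"classification": "valve", "material": "Stainless Steel 316",
         "nominal_diameter": "DN 50"}
polluted = {"classification": "valve", "material": "stainless"}
```

Running the check on `polluted` flags both the non-standard material value and the missing diameter, which is exactly the kind of entry the gatekeeper function is meant to stop at the door.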