Data Science

Challenges of Data Quality in the AI Ecosystem

Introduction

Artificial Intelligence, or AI, has existed since the 1950s, but it is only now beginning to have an impact in the real world. Affordable, high-performance computer hardware paired with cloud-based infrastructure is making it possible to apply AI solutions to real-world problems. By 2025, the AI market is estimated to be valued at around $190 billion.

And this value will only keep growing. From healthcare to security, AI is making its presence felt everywhere. But AI implementation is far from smooth. Data quality has always been a challenge, and the evolving AI ecosystem is raising new concerns.

Today, data quality is one of the foremost challenges in successfully implementing AI systems. The types of data errors and their consequences are very different in the AI ecosystem compared to conventional transaction-based systems.

Systemic Bias

AI and Machine Learning (ML) need very large data sets to train their models. This data may be subject to a systemic bias that could create accuracy issues and potentially violate social norms. The difficulty lies in identifying the bias, one that may not always be apparent, especially when the training data being used is not obviously suspect. For example, let’s say the data used to train the system was collected only from male respondents. In such a scenario, the model would apply exclusively to men.
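As a minimal illustration (not from the original article), a quick representation check can surface this kind of gap before training begins. The column names, values, and reference shares below are assumptions chosen only to show the idea:

```python
import pandas as pd

# Hypothetical training set with a 'gender' attribute; the column names,
# values, and expected shares are assumptions for illustration only.
train = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "M", "M"],
    "label":  [1, 0, 1, 1, 0, 1],
})

# Compare each group's share of the training data against a reference share
# (for example, the population the model is meant to serve).
observed = train["gender"].value_counts(normalize=True)
expected = pd.Series({"M": 0.5, "F": 0.5})

report = pd.DataFrame({"observed": observed, "expected": expected}).fillna(0.0)
report["gap"] = (report["observed"] - report["expected"]).abs()
print(report.sort_values("gap", ascending=False))
# A large gap (here, very few female respondents) flags a data set whose
# model may only be valid for the over-represented group.
```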

Large Data Sets Required

AI and ML also need a huge quantity of data to train models well enough to produce the best results. As the quantity of data being used increases, so does the risk of poor data quality. Common problems include inconsistency across data sets, missing components, lack of balance, duplicate data, redundant data, obsolete data, and integration issues.

These data sets can be screened to improve data quality, but even then, the bias cannot be completely eliminated. Wherever attention to detail is lacking, coding issues can emerge.
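One way to screen for several of these problems at once is a simple profiling pass over the data. The sketch below is illustrative only; the file name and the use of a constant column as a stand-in for redundant data are assumptions:

```python
import pandas as pd

def quality_screen(df: pd.DataFrame) -> pd.DataFrame:
    """Per-column summary of common quality problems: missing components
    and constant columns that add no information (redundant data)."""
    summary = pd.DataFrame({
        "missing_pct": df.isna().mean() * 100,   # missing components
        "n_unique": df.nunique(dropna=True),
    })
    summary["constant"] = summary["n_unique"] <= 1   # redundant columns
    return summary

# Hypothetical customer extract; the file name is an assumption.
df = pd.read_csv("customers.csv")
print("duplicate rows:", df.duplicated().sum())      # duplicate data
print(quality_screen(df))
```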

Incorrect Problem Definition

When the large data sets required for AI carry a systemic bias, the problem the system is being trained to solve may end up incorrectly defined. The wrong data may be selected, which in turn leads to an inadequate result. AI derives results by combining data with algorithms. There may be instances where the algorithm includes prejudicial data definitions, or where the data used to train the system and derive the algorithms does not correctly reflect the broader data the system will encounter.

This problem is compounded when data collected in real time comes from an entirely different source than the training data. The only way to overcome this challenge is to understand the data and algorithms used in each scenario in terms of the desired results.
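A hedged sketch of one such check is to compare the distribution of a feature at training time against what is arriving in real time. The feature, the numbers, and the 0.01 cut-off below are placeholders, not values from the article:

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical numeric feature as seen at training time vs. in production;
# both arrays are synthetic placeholders.
train_age = np.random.default_rng(0).normal(35, 8, 5_000)
live_age = np.random.default_rng(1).normal(48, 8, 1_000)   # different source

# Two-sample Kolmogorov-Smirnov test: a tiny p-value suggests the live data
# no longer looks like the data the model was trained on.
result = ks_2samp(train_age, live_age)
print(f"KS statistic={result.statistic:.3f}, p-value={result.pvalue:.3g}")
if result.pvalue < 0.01:
    print("Live data drifts from the training distribution; review before trusting results.")
```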

Need For Static Models

Even after AI systems have been trained, they need static models. Although AI does provide a certain amount of flexibility in discovering patterns and creating workable models for specific cases, any small change in the conditions reflected by the data being used can result in errors. This also means that the results derived by AI systems need to be constantly monitored to ensure that incorrect conclusions and new biases do not creep in.
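As one illustration of such monitoring (a minimal sketch, with the baseline rate and tolerance invented for the example), a simple check is to compare the output of each scoring batch against the rate recorded when the model was approved:

```python
import numpy as np

# Assumed values recorded when the model was signed off; both are
# hypothetical and would normally come from a validation report.
BASELINE_POSITIVE_RATE = 0.12
ALERT_THRESHOLD = 0.05

def check_predictions(batch_predictions: np.ndarray) -> None:
    """Flag a batch whose positive-prediction rate drifts from the baseline."""
    rate = float(np.mean(batch_predictions))
    drift = abs(rate - BASELINE_POSITIVE_RATE)
    status = "ALERT: review the model" if drift > ALERT_THRESHOLD else "ok"
    print(f"positive rate={rate:.3f}, drift={drift:.3f} -> {status}")

check_predictions(np.array([1, 0, 0, 1, 1, 1, 0, 1]))   # placeholder batch
```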

Lack Of Access To Good Quality Data

Poor quality data is seriously handicapping the potential of AI. A study by Dataversity shows that while 76% of businesses want to leverage data and extract value from it, only around 15% have access to the type of data required to achieve this.

Bad data, inadequate systems, data silos, and compliance issues are amongst the most common reasons for this. Data quality is a big issue with historical data that has been gathered from multiple sources. This data typically falls prey to inconsistent standards and varying accuracy levels.

Premature Abandonment of AI Projects

AI was intended to cut costs and boost profits. But many enterprises that started to implement AI systems are seeing that their investment will not bear fruit until they have access to better quality data.

In the current scenario, a large percentage of the work involved in AI projects goes into preparing data for the system. This is a considerable investment that many companies are not willing to make, and as a result, AI projects are being abandoned. Another reason is the skewed results produced by systemic biases in the data sets, which erode the faith people have in data-driven decisions. It comes down to a simple statement: garbage in, garbage out.

AI Solutions Are Custom-Designed

As a result of the type of data used to train the system and its inherent biases, AI solutions are seen as custom-designed for a single scenario. A lack of comprehensive, good-quality data means the system needs to be trained individually for each variation and use case. This requires additional effort and investment, so it is not always cost-effective.

Ensuring Better Data Quality

Improving data quality will have a direct impact on the speed of AI implementation. To achieve this, attention must be paid to data capturing, selection, cleaning, and storage. As with other applications, for data quality to be considered high, every record must be unique, accurate, complete, and relevant.

The good news is that there are a number of tools and software packages that can help achieve this. Verification software runs the data in a database against reliable third-party databases to validate it and, wherever possible, correct and complete it. Something as simple as making sure all address fields have correct pin codes can make a great difference to the AI system that uses the data.
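For instance, a basic format check on a PIN code field might look like the sketch below. The column names and the six-digit pattern are assumptions used only to illustrate the idea; a real verification step would also confirm each code against a trusted third-party source:

```python
import re
import pandas as pd

# Hypothetical address table; column names and values are placeholders.
addresses = pd.DataFrame({
    "city":     ["Mumbai", "Pune", "Delhi"],
    "pin_code": ["400001", "4110", "110001"],
})

# Assumed format: six digits, not starting with 0.
PIN_PATTERN = re.compile(r"^[1-9][0-9]{5}$")

addresses["pin_valid"] = addresses["pin_code"].apply(
    lambda p: bool(PIN_PATTERN.match(str(p)))
)
print(addresses[~addresses["pin_valid"]])   # rows to correct or re-verify
```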

In addition, data needs to be processed in such a way that it becomes suitable for AI models. Raw data may need to be split into discrete values to become usable, and non-numeric features of a data set cannot be fed directly into most AI models, so they need to be encoded beforehand. This does take effort, but doing so will improve the performance of AI systems and speed up their application in real-time situations.
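A minimal sketch of both steps, with hypothetical column names and bin edges, might look like this:

```python
import pandas as pd

# Hypothetical raw records; the columns and bin edges are assumptions.
raw = pd.DataFrame({
    "age":    [22, 37, 45, 63, 29],
    "region": ["north", "south", "south", "east", "north"],
})

# Split a continuous value into discrete bands...
raw["age_band"] = pd.cut(raw["age"], bins=[0, 30, 50, 120],
                         labels=["young", "middle", "senior"])

# ...and encode the non-numeric features so a model can consume them.
encoded = pd.get_dummies(raw[["age_band", "region"]], prefix=["age", "region"])
print(encoded.head())
```

Here pd.cut handles the binning and pd.get_dummies the encoding; in practice the bin edges and categories would come from the problem domain rather than being hard-coded.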

Original post: https://dzone.com/articles/challenges-of-data-quality-in-the-ai-ecosystem


