Smartphones, smart speakers, smart cars, smart coffee makers... the list goes on. It seems like everything around us is coming to life and getting intelligent. And though the sci-fi genre thrives on our ever-present fear of a hostile robot takeover, smart devices are anything but dystopian—they’re actually here to make our lives easier, so we can spend more time on the important stuff, instead of tedious busywork.
Tech companies know that increased automation is the way of the future, just like it was when Ford pioneered the assembly line. Advanced technology like artificial intelligence (AI) and machine learning (ML) is fueling the most exciting innovations in recent history—think self-driving cars, virtual and augmented reality, automated investing, improved medical imaging, and more. The benefits of this technology are becoming more and more obvious, and companies are rushing toward adoption and racing to build it into their products.
As this technology becomes more commonplace in sensitive, high-stakes spaces (like the auto, medical, and finance industries), it’s vital that DevOps teams take a strong approach to QA testing. When public safety, a customer’s livelihood, or a patient’s data is at risk, even the smartest algorithm must be checked and checked again by a human engineer.
Before diving into the approach for testing smart products, let’s differentiate between artificial intelligence and machine learning. Though the terms are often used interchangeably, there are some key differences.
|Artificial intelligence|Machine learning|
|---|---|
|Applies knowledge or skills|Gains knowledge or skills over time|
|Prioritizes success over accuracy|Prioritizes accuracy over success|
|Simulates natural intelligence|Learns continuously from a set of data|
|Simulates human responses to problems|Creates continuously learning algorithms|
|Searches for the optimal solution|Searches for any solution, whether optimal or not|
Put simply, artificial intelligence refers to a system that performs tasks in a way that we humans might consider smart or efficient, while machine learning is automated, continuous self-training performed by a system that leverages pre-existing data.
Testing artificially intelligent systems
Challenges and potential complications
- Huge volumes of collected data present storage and analytics challenges—scrubbing this amount of data can be incredibly time-consuming
- Data may be collected during unanticipated events or circumstances, making it difficult to gather and use for training purposes
- Human bias may appear in training and testing data sets
- Defects quickly fester and grow more complex in AI systems
Key aspects of testing
The key to successful AI is good data. Before it’s supplied to an AI system, your data should be scrubbed, cleaned, and validated. Your QA team should be wary of human bias and variety that can complicate the system’s interpretation of the data—think of a car navigation system or smartphone assistant trying to interpret a rare accent.
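To make the scrubbing step concrete, here is a minimal sketch of pre-training data validation. The record layout (hypothetical "age" and "label" fields) and the validity rules are illustrative assumptions, not any particular product's schema.

```python
# A minimal sketch of pre-training data validation, assuming a simple
# list-of-dicts dataset with hypothetical "age" and "label" fields.

def clean_records(records):
    """Drop records that are incomplete or clearly out of range."""
    cleaned = []
    for rec in records:
        age = rec.get("age")
        label = rec.get("label")
        if age is None or label is None:
            continue                      # reject incomplete records
        if not (0 <= age <= 120):
            continue                      # reject implausible values
        cleaned.append(rec)
    return cleaned

raw = [
    {"age": 34, "label": "yes"},
    {"age": -5, "label": "no"},           # implausible value: dropped
    {"age": 61},                          # missing label: dropped
]
print(clean_records(raw))
```

In practice these checks would be far richer (deduplication, encoding fixes, bias audits), but the principle is the same: reject or repair bad records before the model ever sees them.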
At the heart of AI is the algorithm, which processes data and generates insights. Some common algorithms relate to learnability (the ability of Netflix or Amazon to learn customer preferences and serve new recommendations), voice recognition (smart speakers), and real-world sensor detection (self-driving cars).
These should be tested thoroughly with model validation, successful learnability, algorithm effectiveness, and core understanding in mind. If there’s any issue with the algorithm, there are sure to be more serious consequences down the road.
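One way to enforce this in a QA pipeline is a release-gate check: score the model against held-out expected outputs and fail the build if accuracy falls below an agreed threshold. The stand-in model, the `predict` function, and the 0.95 threshold below are all illustrative assumptions.

```python
# A hedged sketch of algorithm validation: compare model predictions
# against held-out expected outputs and fail if accuracy drops below
# an agreed release threshold.

def accuracy(predict, examples):
    """Fraction of (input, expected) pairs the model gets right."""
    correct = sum(1 for x, expected in examples if predict(x) == expected)
    return correct / len(examples)

# Stand-in "model": classifies a number as positive or not.
predict = lambda x: "pos" if x > 0 else "neg"

held_out = [(3, "pos"), (-1, "neg"), (7, "pos"), (0, "neg")]

score = accuracy(predict, held_out)
assert score >= 0.95, f"model accuracy {score:.2f} below release threshold"
```

Catching an algorithm regression at this gate is far cheaper than discovering it after the system has been integrated downstream.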
Performance and security testing
Just like any other software platform, AI systems require intensive performance and security testing, along with regulatory compliance testing. Without proper testing, niche security breaches (such as using voice recordings to fool voice recognition software, or chatbot manipulation) will become more common.
Systems integration testing
AI systems are built to hook into other systems and solve problems in a much larger context. For all of these integrations to work correctly, it’s necessary to perform a complete assessment of the AI system and its various connection points. With more and more systems absorbing AI characteristics, it’s vital that they’re tested carefully.
Testing machine learning systems
The goal of ML systems is to acquire knowledge on their own, without being explicitly programmed. This requires a consistent stream of data to be fed into the system—a much more dynamic approach than the one traditional testing is based on (fixed input = fixed output). Accordingly, QA experts will need to think differently about implementing test strategies for ML systems.
Training data and testing data
Training data is the set of data that is used to train the model for the system. In this data set, the input data is supplied along with the anticipated output. This is typically prepared by collecting data in a semi-automated way.
Testing data is a held-out portion of the collected data, kept separate from the training set and logically built to cover the possible combinations and determine how well your model is trained. Based on the results of the test data set, the model will be fine-tuned.
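The split itself can be sketched in a few lines. Real projects typically reach for a library utility (for example, scikit-learn's `train_test_split`); this version uses only the standard library, and the 20% fraction and fixed seed are arbitrary illustrative choices.

```python
# A simple sketch of carving a held-out test set from collected data.
import random

def split_data(records, test_fraction=0.2, seed=42):
    """Shuffle deterministically, then hold out a fraction for testing."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

data = list(range(100))
train, test = split_data(data)
print(len(train), len(test))   # 80 20
```

Fixing the seed keeps the split reproducible, so a failing test run can be replayed against exactly the same held-out records.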
Test suites should be created to validate the system’s model. The principal algorithm analyzes all of the data provided, looks for specific patterns, and uses the results to develop optimal parameters for creating the model. From there, it is refined as the number of iterations and the richness of the data increases.
Communicating test results
QA engineers are used to expressing the results of testing in terms of quality, such as defect leakage or the severity of defects. But the validation of models based on machine learning algorithms will produce approximations—not exact results. The engineers and stakeholders will need to determine the acceptable level of assurance, within a certain range, for each outcome.
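In code, that means asserting a metric falls inside an agreed band rather than matching an exact value. The recall target of 0.90 ± 0.03 below is a hypothetical agreement, not a recommended standard.

```python
# A sketch of expressing ML test results as a tolerance band rather
# than an exact pass/fail value.

def within_tolerance(measured, target, tolerance):
    """True if the measured metric falls inside the agreed band."""
    return abs(measured - target) <= tolerance

# Example: stakeholders accept recall of 0.90 +/- 0.03 for this release.
measured_recall = 0.885
print(within_tolerance(measured_recall, target=0.90, tolerance=0.03))  # True
```

The band itself becomes part of the test contract: tightening it over time is how a team ratchets up quality as the model and data mature.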
Testing AI and ML systems requires a unique approach that most QA engineers are not yet familiar with. Because the data sets are diverse, dynamic, and need to be constantly refreshed and supplied, you should partner with a QA team that is able to meet these specific needs. Click below to learn more about staying on the leading edge of AI/ML systems testing!