QASource recently presented the webinar "Accelerate Your Automation With API Testing." In it, QASource CEO Rajeev Rai discussed API testing's rise in popularity and the necessity and benefits of testing APIs, and offered tips and best practices for performing API testing.
We presented this webinar twice in September due to its popularity. During both live presentations, attendees had the opportunity to ask questions, but we were unable to get to them all. Here, our expert API engineers answer the remaining questions.
- I am working in the big data field: Apache Spark, Cassandra, etc. Are there any tools that are recommended specifically for API testing?
There are many tools available in the market that can help automate or test your APIs, such as Postman, ReadyAPI, REST Assured and Jersey Jackson. Before choosing any of these tools, however, make sure you evaluate them against parameters such as those in our 10 Steps to Start API Testing checklist, so you can be certain they satisfy all your present and future needs.
When dealing with big data, you also need to ensure that your API tool or tests support performance evaluation of APIs, so that you can validate response times and performance benchmarks against your product's needs. For example, to test a REST API you can use Postman, ReadyAPI, REST Assured or Jersey Jackson; to test SOAP requests, you can use SoapUI or ReadyAPI.
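As an illustration of validating response times against a benchmark, here is a minimal Python sketch. The `fetch_orders` function is a hypothetical stand-in for a real API call; in practice you would issue the request with your chosen tool and measure its actual latency.

```python
import time

# Hypothetical stand-in for a real API call (e.g. issued with the requests
# library or exercised via Postman); replace with your actual HTTP request.
def fetch_orders():
    time.sleep(0.05)  # simulate network and processing latency
    return {"status_code": 200, "items": [{"id": 1}, {"id": 2}]}

def check_response_time(call, max_seconds):
    """Fail the test if the API call exceeds the performance benchmark."""
    start = time.monotonic()
    response = call()
    elapsed = time.monotonic() - start
    assert elapsed <= max_seconds, f"Too slow: {elapsed:.3f}s > {max_seconds}s"
    return response, elapsed

# Benchmark of 2 seconds is illustrative; set it from your product's needs.
response, elapsed = check_response_time(fetch_orders, max_seconds=2.0)
```

The same wrapper can be reused across a suite so every functional API test doubles as a lightweight performance check.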
- One of the challenges with API testing is avoiding false positives, specifically when dealing with dynamic values. A NULL can return a 200 response, which may go unnoticed when running a large number of automated scripts. Do you have any recommendations for how to deal with this situation?
As we mentioned in the best practices portion of the webinar, we recommend adding multiple assertion points to every API test, validating not only the 200 response code but also the corresponding response body parameters. These additional assertion points make the test script more robust and reduce false positives.
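As a sketch of this practice, the hypothetical test below (the endpoint, field names, and response shape are illustrative, not from the webinar) asserts on the status code, on the body being non-empty, and on individual body fields, so a NULL body behind a 200 status still fails:

```python
import json

# Hypothetical response from GET /users/42; in a real suite this would come
# from your HTTP client (requests, REST Assured, Postman scripts, etc.).
raw_response = {
    "status_code": 200,
    "body": json.dumps({"id": 42, "email": "jane@example.com", "active": True}),
}

def assert_user_response(response):
    # Assertion 1: status code alone is not enough on its own
    assert response["status_code"] == 200
    body = json.loads(response["body"])
    # Assertion 2: body must not be empty/NULL (guards against false positives)
    assert body, "Got 200 but the response body is empty"
    # Assertion 3: required fields are present with expected values and types
    assert body["id"] == 42
    assert "@" in body["email"]
    assert isinstance(body["active"], bool)
    return body

user = assert_user_response(raw_response)
```

If the server had returned `200` with a `null` body, assertion 2 would fail the test instead of letting it pass silently.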
- When is a good time to start API testing for a new feature?
API testing can start even while the Dev team is building the new feature. The QA team can begin automating tests against mock services and have their test skeleton ready before the feature itself is ready. As a standard practice, we recommend that the API testing team be involved from the beginning of a new feature and work alongside the Dev team. This helps find and fix bugs early and reduces development cost.
- How do you link your API tests to the manual tests?
API tests can be linked to manual tests by setting up test data or executing prerequisite steps prior to executing the manual test case. APIs can also be used to transition scenarios to a particular stage in the workflow, enabling the engineer to test only the affected stage of the workflow — saving effort, time and cost.
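As a rough sketch of transitioning a scenario to a particular workflow stage via the API, the Python below models a hypothetical ticket workflow; `advance` stands in for whatever transition endpoint (e.g. a POST to a transition route) your product actually exposes:

```python
# Hypothetical workflow: a ticket moves new -> triaged -> in_progress -> review.
STAGES = ["new", "triaged", "in_progress", "review"]

def advance(ticket):
    """Stand-in for one API transition call on the ticket."""
    ticket["stage"] = STAGES[STAGES.index(ticket["stage"]) + 1]
    return ticket

def bring_to_stage(ticket, target):
    """Drive the ticket to `target` through API calls, so the manual tester
    can start directly at the stage under test instead of clicking through
    every earlier stage in the UI."""
    while ticket["stage"] != target:
        advance(ticket)
    return ticket

ticket = {"id": "T-17", "stage": "new"}
bring_to_stage(ticket, "review")
```

Running such a script before a manual session leaves the application in exactly the state the test case needs, saving the effort, time and cost mentioned above.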
- Can you expand on the "Integrated UI Automation" benefit from the presentation?
Integrated UI Automation means using product APIs to enhance your UI automation. In this approach, product APIs set up prerequisites and test data before the UI tests execute, and UI automation coverage can also extend to areas where no UI interface exists. This approach brings the following benefits of “Integrated UI automation”:
- Faster execution of UI suites
- Improved reliability of test results
- Reduced false positives
- Improved coverage
For example, if we are testing a user profile update module through the UI, we first need to register a new user and log in with that user in order to test the updated-profile scenarios. Using the "Integrated UI automation" approach, we can automate the new-user registration prerequisite through the API and then integrate it with the automated UI test case, in which the newly registered user (created through the API) logs in and updates the profile. This makes the GUI automation tests more reliable, robust and faster.
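A minimal sketch of that example, with `api_register_user` standing in for the real registration API call and `ui_login_and_update_profile` standing in for the browser-driven UI steps (both names, and the in-memory user store, are hypothetical):

```python
import uuid

# In-memory stand-in for the application's user store.
_users = {}

def api_register_user():
    # Prerequisite handled through the API: fast and reliable compared to
    # driving the registration form through the browser.
    email = f"user-{uuid.uuid4().hex[:8]}@example.com"
    _users[email] = {"email": email, "name": ""}
    return email

def ui_login_and_update_profile(email, new_name):
    # Only the feature under test (profile update) goes through the UI;
    # in a real suite these would be Selenium/Playwright interactions.
    assert email in _users, "login failed"
    _users[email]["name"] = new_name
    return _users[email]

email = api_register_user()                        # seeded via API, not the UI
profile = ui_login_and_update_profile(email, "Jane Tester")
```

The UI test now spends its time only on the profile-update flow, which is what makes the integrated suite faster and less flaky.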
- Do you generate your test case data automatically? How? And what about test case data clean up so that the tests can run correctly next time?
Yes, we generate test data automatically in our before-suite and before-test methods. This can be set up through API calls, seeding scripts or automated UI tests. Once the test data is created or set up at run time, we store it in a test data file after the test suite completes.
For test data cleanup, we implement a teardown step after each test suite executes, deleting the test data from the application under test. This way, after test execution finishes, the application always reverts to a known, controlled state, and the test suite can run again and again with the same set of test data.
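This setup/teardown pattern can be sketched with Python's `unittest` class-level hooks. The in-memory `DB` dict is a stand-in for the application under test; a real suite would seed and delete the data through API calls or seeding scripts instead:

```python
import unittest

# Stand-in for the application under test's data store.
DB = {}

class ProfileApiTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Before-suite: create the test data once, up front
        DB["user-1"] = {"name": "Jane"}

    @classmethod
    def tearDownClass(cls):
        # After-suite teardown: remove the data so the next run
        # starts from the same known, controlled state
        DB.clear()

    def test_user_exists(self):
        self.assertEqual(DB["user-1"]["name"], "Jane")

suite = unittest.TestLoader().loadTestsFromTestCase(ProfileApiTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because teardown always runs after the suite, the store is empty again once the run finishes, and the suite can be executed repeatedly with identical data.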
If you missed "Accelerate Your Automation With API Testing" or want to watch it again, click the button below.
Interested in other webinars by QASource? Browse our collection here.