As someone new to the AI scene, I’ve been attempting to find discussions and examples of how people are using AI to facilitate the various QA tasks a person would execute during a typical SDLC.
I’ve found a few random posts online, but they’re all pretty basic and vague; nothing that would give any meaningful guidance.
The most common thing I hear about is “writing test cases”. My concern here, and maybe this is showing my ignorance of AI in general, is how you would go about verifying that those test cases form a complete set. Do they cover real-world scenarios? Do they cover all variables? Etc.
I realize there are methods of having the AI check itself, and you can put specific guidelines in there for it to follow, but even that only improves the chance of success; it doesn’t guarantee 100%. Sure, people make mistakes, too, but I’m struggling to see where this is better/more efficient/effective than just writing the test cases manually.
What am I missing?
Can you say a little about what SDLC means?
Remember that not everyone on the forum shares the same specialized background; people here come from many different industries.
> Can you say a little about what SDLC means?

SDLC stands for the Software Development Life Cycle.
Hi @dhartman
In Generative AI for Everyone (or AI For Everyone), the concept presented is to “automate tasks”. A list of tasks for QA can be found here: 15-1253.00 - Software Quality Assurance Analysts and Testers
(Maybe we can ask ChatGPT or Perplexity for some inspiration?)
But just off the top of my head, AI can help with:
- documentation,
- test cases,
- preparing dummy test data (see the sketch after this list),
- suggesting improvements,
- defining processes,
- etc.
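For example, here is a minimal sketch of the dummy-test-data idea, assuming the official OpenAI Python SDK; the model name, column layout, and prompt are just illustrative choices, and any other LLM client would work the same way:

```python
# Minimal sketch: asking an LLM to prepare dummy test data for a hypothetical "users" table.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in the environment.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "Return a JSON object with a 'rows' key holding 5 rows of realistic but fake user test data. "
    "Each row needs: id (int), full_name, email, country (ISO code), signup_date (YYYY-MM-DD). "
    "Include edge cases such as very long names and non-ASCII characters."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                       # example model; swap in whatever you use
    response_format={"type": "json_object"},   # ask for strict JSON so it parses cleanly
    messages=[{"role": "user", "content": prompt}],
)

rows = json.loads(response.choices[0].message.content)["rows"]
for row in rows:
    print(row)
```

You would still want to eyeball (or schema-validate) the output before loading it into a test database, which ties back to the verification concern above.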
My favorite idea, which I know only in theory but which looks promising, is Spidering AI.
However, I think that AI for quality in general can be a much bigger idea than that. There are tools such as Orca Security that already use AI to improve security.
GitHub Copilot can help as a peer reviewer and improve code quality.
There was also an interesting case with Ethereum where AI was able to find a vulnerability in the white paper.
So I believe imagination is the limit, and I am sure there will be more and more tools for this.
Thanks for the response! I see some things in here that I need to investigate.
As I think you alluded to, a lot of obvious things come to mind such as test cases and documentation, etc. I find those things to be efficiency enhancers and a bit superficial. That isn’t a criticism, though. There is still value in those efforts.
What I feel, though, is that AI has the potential to be transformative in how QA/testing is done at a fundamental level. I’m guessing those items you pointed out may illustrate my beliefs, at least to a degree.
Thanks, again, for directing me to these.
In the context of a traditional Software Development Life Cycle (SDLC), Artificial Intelligence (AI) is increasingly being utilized for Quality Assurance (QA) to enhance testing processes and efficiency. AI-based QA tools are employed for tasks such as automated test script generation, test data generation, and even defect prediction.
These tools leverage machine learning algorithms to analyze historical data, identify patterns, and predict potential areas of concern in the software.
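As a rough illustration of the defect-prediction idea, here is a minimal sketch that trains a classifier on historical per-module metrics; the `history.csv` file and its column names are made up for this example, and a real tool would use far richer features, but the principle is the same:

```python
# Sketch: predicting defect-prone modules from historical code metrics.
# Assumes a hypothetical history.csv with one row per module and a had_defect label (0/1).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("history.csv")  # example columns: loc, churn, complexity, authors, had_defect
X = df[["loc", "churn", "complexity", "authors"]]
y = df["had_defect"]

# Hold out part of the history to check how well the model flags risky modules.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

In practice the value comes less from the model itself than from focusing manual test effort on the modules it flags as risky.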