What's Up with Tech?

AI Test Engineers: The Future of QA

Evan Kirstel

Interested in being a guest? Email us at admin@evankirstel.com

Testing has become the hidden bottleneck in software development. While companies invest heavily in AI-powered coding tools to accelerate development, quality assurance can't keep pace—until now. 

Tal Barmeir, serial entrepreneur and co-founder of BlinqIO, reveals how his company's AI test engineer is fundamentally changing the testing landscape. This isn't just another automation tool; it's an intelligent system that autonomously creates, executes, and maintains test code by understanding the business logic behind requirements. When a user interface changes, the AI doesn't break—it adapts, just as a human would.

The most revolutionary aspect? Solving the seemingly impossible challenge of comprehensive testing coverage. Traditional testing requires enormous resources to cover multiple browsers, devices, screen sizes, and languages. Blinq's AI test engineer handles all variations from a single test requirement, effectively "speaking" 50+ languages and navigating different UI layouts with human-like intelligence.

Contrary to replacement fears, this technology transforms rather than eliminates human roles. Testers evolve from frantically coding automation scripts to managing and auditing AI-generated work. Organizations finally have a path to achieve 100% testing coverage—something most have given up on due to resource constraints.

Getting started takes minutes, using familiar test recording workflows enhanced with AI capabilities. The system creates business-logic descriptions rather than mechanical UI interaction scripts, enabling intelligent adaptation as applications evolve.

Ready to eliminate your testing bottleneck? Discover how an AI test engineer could transform your quality assurance while accelerating your entire software development lifecycle.

Support the show

More at https://linktr.ee/EvanKirstel

Speaker 1:

Hey everybody, fascinating chat. Today we talk about the rise of the AI test engineer and its implications for software development with an innovator in this space at BlinqIO. Tal, how are you?

Speaker 2:

Good, good. Hi, Evan, thank you for having me here.

Speaker 1:

Thanks for being here, fascinating topic and journey you're on. Before that, maybe introduce yourself, your bio and background, and what's the big idea behind BlinqIO.

Speaker 2:

Right, so I'm Tal Barmeir. I'm a serial entrepreneur. This is the third company that I and my co-founder, Guy Arieli, have set up together, all of them in software testing. So we're very much focused on this area. And as ChatGPT and other generative AI innovations came to the world, we were looking at ideas for how we could leverage them to actually accelerate and solve some of the very painful problems in software testing, and we came up with the idea to create an AI test engineer. As the name indicates, it's a test engineer that is operated solely by generative AI.

Speaker 1:

Amazing. So what's the state of best practices, state of the art for testing? Is it still a job for the QA department? Who owns quality these days for a product or service?

Speaker 2:

So that's a very good question, and there are basically two main structures that you see today. You have organizations working in the more traditional way, where you have a QA organization focused on quality, and there you'd have a hierarchy with some people creating test automation code that tests automatically (these would be programmers) and some people who are manual testers. So that's one way: the QA organization is an independent department focused only on QA. But what we see more and more is what we call shift left, and that's when QA is actually pushed over to the development organization, and the developers who program the features of the product are also tasked with creating test automation to verify that the feature they developed is working properly. In that case, the product programmers are the ones who are also programming the test automation.

Speaker 1:

Yeah, it's good to see where this is at the moment. So there's a lot of fear of missing out, or FOMO, as companies rush in to automate their testing. We've had automation for some time, but now there's a real surge of investment in AI for software development and testing. You've seen announcements by OpenAI. You've seen Salesforce's news about not hiring developers this year. You know, on and on and on. But what are some of the big mistakes that you see out there when teams rush into automation with all of this excitement?

Speaker 2:

So I think it's a very interesting question because, as all of us know, there's a lot of buzz around generative AI and how people, programmers and other technical people, can actually leverage it. What we see is that a lot of organizations are focused on how to accelerate the product development side of the house, and they forget that one of the biggest bottlenecks is actually on the testing side of the house.

Speaker 2:

So we see a lot of organizations implementing and integrating co-pilots and similar tools to accelerate product development and coding, and every so often today you get announcements about how much of the product code is actually created by AI. But then all of this code, all of this very good and innovative code, is pushed over to the testing organization, which is a bit less in focus. Then you find that a lot of the bottlenecks and holdbacks are actually still there, and the testers are not able to keep up with the pace of development happening on the product side of the house. So I think that's a big mistake: not looking at the overall software release cycle, just focusing on the part that is often considered more sexy, you know, which is product development. The testing side is just as important, and it's actually a gatekeeper. So if you don't implement generative AI throughout the software development cycle, you won't receive the benefits you're after.

Speaker 1:

Interesting. So, when you talk about big companies, they could have thousands of developers distributed around the world, and you know this space so well. What are some of the surprises or unexpected challenges, let's say, when they try to automate testing at such a huge scale?

Speaker 2:

So I think one of the biggest challenges in automating testing, and that's the reason we actually set up BlinqIO and brought generative AI into the picture, is that you never get enough good test automation coders to create the test automation code.

Speaker 2:

So test automation is great, and once it's there it's really automated, but the scripting itself, the coding itself, is still done manually by humans. And it happens to be that most of the coders, or many of them, go to the product development side of the house and not to the testing side of the house, and you're left with the big challenge of how to recruit and retain good coders to create test automation code. This is exactly where we came into the picture and created the AI test engineer that does exactly that. It codes the test automation code, and not only does it code it initially, it also knows how to maintain it and autonomously self-heal it when the UI changes in the product. And that miracle happens because it works against the business logic of the test requirement. So even if the UI changes, it goes to the test requirement, it knows what the purpose of the test was, what the business logic of the test requirement was, and it knows how to update the test automation code accordingly and align it with the updated UI.
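[To make that concrete, here is a minimal sketch, in Python, of the requirement-driven self-healing loop Tal describes. The names generate_steps and run_steps are hypothetical stand-ins for the generative-AI and browser-execution layers, not BlinqIO's real API.]

```python
from dataclasses import dataclass

@dataclass
class Result:
    passed: bool
    failure_context: str = ""

def generate_steps(requirement: str, failure_context: str = "") -> list[str]:
    # Stand-in for the generative-AI layer that turns a business-logic
    # requirement (plus any failure context) into concrete UI steps.
    return [f"derived step for: {requirement}"]

def run_steps(steps: list[str]) -> Result:
    # Stand-in for the browser-execution layer.
    return Result(passed=True)

def run_with_self_healing(requirement: str, max_attempts: int = 2) -> Result:
    steps = generate_steps(requirement)
    result = run_steps(steps)
    attempts = 0
    while not result.passed and attempts < max_attempts:
        # The UI changed? Re-derive the steps from the same business-logic
        # requirement, exactly as a human tester would re-read the intent.
        steps = generate_steps(requirement, failure_context=result.failure_context)
        result = run_steps(steps)
        attempts += 1
    return result

print(run_with_self_healing("Log in with a valid user account").passed)
```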

Speaker 1:

Fantastic. So what is realistic when it comes to quality these days? Are there certain targets or objectives? I mean, everyone wants perfect code, but what's the reality, both with your tool and with existing practices?

Speaker 2:

So that's a very interesting question.

Speaker 2:

At the end of the day, you come to organizations and you ask them, you know, what is the status of your testing, and almost always you find out that they still have very limited test automation coverage, because they're chasing the maintenance challenge of keeping up with all the product UI changes and updating the test automation code. For that very reason, they're also using a lot of manual testing.

Speaker 2:

So I think that's one of the biggest challenges: how do you actually get coverage? Now, one of the interesting things when you bring generative AI into the picture is the following. The testing matrix is complex for two main reasons. One is multi-platform: the fact that you need to test an application or a web app on several browsers and screen sizes and mobile devices and tablets and what have you. The other thing that makes the testing matrix very complex is languages. So let's say you have a bank application and it needs to be in Spanish and English, in Japanese and French and, you know, whatever, so you need to actually test across a very large matrix. What's interesting about generative AI is, first of all, it speaks 50 languages.

Speaker 2:

So it doesn't really care if it's going to test, say, a JP Morgan application in Japanese, in Spanish, in French or in English; it's exactly the same. You can think about it as a manual tester that speaks all of those languages as a mother tongue. So that's a big thing. You give it the test requirement one time, you can give it in English, and it will know how to execute and implement it across all of those language variations of the application or web app. The other aspect is that, since the generative AI works against a test requirement, similar to what a manual human tester would receive, it is able to figure things out even if the application or web app is collapsed on a very small screen into a three-line hamburger menu, or if it's a wide menu spread across a large screen. It actually knows how to find its way around and figure out the UI, even if the UI is not identical in all forms.

Speaker 2:

This means that now, in order to test, let's say, a mobile application or a web application, you can have one single description in English, or French, or Spanish, whatever you prefer, of what the test requirement is, and execute that test across all language variations of that application and against all screen sizes and mobile devices. This is huge news for a testing industry that has been struggling for many, many years to get coverage for this huge matrix of languages and platforms, and that now has this super-capable AI, which is able to do this without any additional input.
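[A minimal sketch of the coverage matrix being described: one business-logic requirement fanned out across language and platform variations. Illustrative only; the specific language and platform lists are assumptions.]

```python
from itertools import product

# One business-logic requirement...
requirement = "Transfer money between two of the user's own accounts"

# ...fanned out across the two axes that make the testing matrix explode.
languages = ["en", "es", "ja", "fr", "de"]
platforms = ["chrome-desktop", "safari-desktop", "iphone", "android-phone", "tablet"]

variations = list(product(languages, platforms))
print(f"1 requirement -> {len(variations)} test variations")  # 25 here
```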

Speaker 1:

Amazing. So, as you know, there's a big debate about automation replacing people, particularly in software development, and there are some early signs that a lot of entry-level positions are being subsumed, maybe, or disrupted; and of course a lot of entry-level software people start in testing. So what do you think? Will automation get rid of manual testing completely? What are humans still good for when it comes to testing and software development?

Speaker 2:

So I have two very important points on this. First of all, all organizations today have a huge backlog in testing. They're far, far away from having testing covered. So if there is any way to boost the productivity of the testing organization, it will probably be for the better of everyone, without any person going home. You know, just the fact that you can actually finally reach a hundred percent coverage: all languages, all platforms, all versions of your application, backward compatibility.

Speaker 2:

There's this huge matrix which today is usually never actually met, and all the bugs and issues and flops that happen are because of that. So, first of all, there's a huge backlog, so it would be great if productivity could be boosted with AI. But the other thing is that AI is not working totally on its own. There is always, also in the BlinqIO AI test engineer, human audit, human oversight, because AI can make mistakes, and if it makes mistakes, the mistakes can be big. So what you basically have is a bit similar to the way people manage other people.

Speaker 2:

The AI does its work, but the people who before were frantically trying to type in test automation code are now sitting back as managers of those AI test engineers, overseeing the work they do, approving changes they want to submit into the main branch, doing, if you wish, code reviews on what they're doing. And this is very important to understand because, at the end of the day, AI is a way to significantly boost the productivity of humans, but humans still stay in the loop. So it is important to understand this is not a black box. This is work that is being done by AI but audited by humans.
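[A rough sketch of the human-in-the-loop gate described here: AI-generated test changes land as proposals that a human must approve before they reach the main branch. This is a hypothetical workflow, not a specific BlinqIO feature.]

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    diff: str
    approved: bool = False

# AI-generated changes queue up for review, like pull requests from a colleague.
queue = [
    Proposal("Self-healed login test after UI change",
             "- click '#login'\n+ click '#signin'"),
]

def review(proposal: Proposal, approve: bool) -> None:
    # The human auditor plays the role of a code reviewer for the AI's work.
    proposal.approved = approve

review(queue[0], approve=True)
mergeable = [p for p in queue if p.approved]
print(f"{len(mergeable)} AI-generated change(s) approved for the main branch")
```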

Speaker 1:

Wow, such a great point. You don't often hear that specifically. Another big topic: I was at RSA recently, the big security show, and this was really top of mind. Using AI bots to find vulnerabilities and zero-days before the hackers do. Is this part of your approach?

Speaker 2:

Yes. We believe that AI, because it works around the clock and it's always there, makes a lot of processes seamless, including, as you just mentioned, security processes, but also testing processes, because you don't have to wait on people in the process. You do have people auditing the results and everything, but you're not dependent on them. It's a bit similar to what Jira did to project management. In the past, if you had a project, you'd need to wait until the next day for the project manager to collect all the inputs, present some sort of Gantt chart, ask everyone what they did and where they are, align everyone and so forth, and allocate new tasks to the team members. Now with Jira, you just go in in the morning, you see what your tasks are, everything is up to date; it just happens seamlessly in the background. And the same is actually happening for testing. With generative AI, the ability to have testing happen in the background, without it being dependent on different human functions in the middle, that is super important.

Speaker 1:

Fantastic. So it was only a few years ago, I remember, when people were skeptical about AI. Now it's just a matter of how and when. What about your customers? What are they telling you? Do they have concerns, or are they embracing, you know, the AI test engineer wholeheartedly? What's some of the feedback you're getting from customers?

Speaker 2:

I think, you know, we're solving a very specific, very painful problem in the testing market. I'd say it's almost a problem that people gave up on, instead of trying to recruit all the test automation coders they need, plus business analysts to analyze the failed test results and allocate them to the relevant people for resolution and so forth.

Speaker 2:

They almost gave up on that and said, okay, you know, whatever: these are the test automation coders we have, and what we're going to do is actually risk management, testing less than we need and basically betting that the risk will not materialize. So now they see that they suddenly have this endless pool of resources that can work in all languages, on all platforms, around the clock, and costs a fraction of even trying to recruit all these people, people they effectively know they could never have recruited, having tried for years unsuccessfully. This is a very big message to the testing market, and we see customers embracing this very positively, because it solves a very, very painful problem.

Speaker 1:

Amazing. So how does your AI test engineer figure out which tests are actually the most important ones to run? And is it just scale, or how does that decision making work?

Speaker 2:

So it actually learns the application. I'd say that there are two types of applications. There are applications that are well documented on the internet, which OpenAI and the other general-purpose LLMs already know. When you give the BlinqIO AI test engineer an application such as Salesforce or ServiceNow or HubSpot or SAP and so forth, it basically knows those applications ahead of time. It's able to suggest test scenarios, it's able to give you an indication of the amount of coverage that you have, and so forth. Then there are applications which are not documented, which are not public applications, let's say internal applications of banks, such as mortgage applications or whatever. For these, the BlinqIO AI test engineer learns the application and develops what we call a RAG model, which gives it the same understanding of your internal, not publicly documented applications that it was sort of born with for Salesforce, SAP and so forth. So it has a self-learning capability for the application you provide, even if it is an application that has not been documented on the open internet.
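[In broad strokes, and purely as an illustrative sketch rather than BlinqIO's implementation, retrieval-augmented generation (RAG) over an undocumented internal app can look like this: index what was observed while exploring the app, then retrieve that context when generating a test. The toy word-overlap ranking stands in for real vector embeddings.]

```python
# Observations gathered while exploring an internal (undocumented) app.
observations = [
    "The mortgage calculator page has fields: amount, term, rate.",
    "Submitting an application requires an uploaded income statement.",
    "The dashboard lists open applications with a status column.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by naive word overlap with the query;
    # a real system would use embeddings and a vector index instead.
    words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

query = "Create a test for submitting a mortgage application"
context = retrieve(query, observations)
prompt = "Known app behavior:\n" + "\n".join(context) + "\n\nTask: " + query
print(prompt)  # this retrieved context would ground the generated test steps
```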

Speaker 1:

Amazing. And so how easy is it to get started, in terms of complexity and time investment, with an AI test engineer? This is a new concept for many companies, big and small.

Speaker 2:

It's very easy. What we basically did was try to bridge from the way people are used to working today, with all sorts of recorders and such. So you have something that looks a bit like the existing codeless or other test recorders. You just click through a flow, and you can certainly add all sorts of verifications along the way. What it does is figure out what the flow you performed was, and it creates a test requirement at a business-logic level, not at the level of:

Speaker 2:

"I clicked in the username field and the password field and then I clicked on the login button." No, it says: "I logged in with a user and password." That's it. So if tomorrow it's not a user and password but an email and a PIN or anything else, it would know how to figure that out, exactly like a human test engineer would, with human intelligence. It's as intelligent as a human tester. So it is able to go to the business-logic description it created when you clicked through the flow and update the test automation code it generated initially to align with any changes in the UI. Starting to use a BlinqIO AI test engineer is a matter of minutes. It's very straightforward. It's a bit similar to the old recorders, but it actually generates code, and it has all the intelligence: creating the business-logic test description and the code, maintaining the code against UI changes, analyzing failed test executions. So it's as simple as the old recorders, but as powerful as you can get with AI.
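[As a toy illustration of that recording-to-business-logic step: collapse a recorded click stream into an intent-level requirement. The format is hypothetical, not BlinqIO's actual artifact.]

```python
# A recorded UI flow, as a click-level script (hypothetical format).
recorded_flow = [
    {"action": "type", "target": "username field", "value": "demo"},
    {"action": "type", "target": "password field", "value": "***"},
    {"action": "click", "target": "login button"},
]

def to_business_logic(flow: list[dict]) -> str:
    # A real system would use an LLM to infer intent; this stub just shows
    # the shape of the output: intent, not mechanics.
    if any("login" in step["target"] for step in flow):
        return "Log in with a valid user and password"
    return "; ".join(f"{s['action']} {s['target']}" for s in flow)

print(to_business_logic(recorded_flow))
# -> "Log in with a valid user and password"
# If the UI later switches to email + PIN, the requirement still holds;
# only the generated automation code needs to be re-derived.
```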

Speaker 1:

Amazing. So let's look at the future. What's next over the next year, two, three years for BlinqIO? And are you going to have an AI tech support engineer and an AI developer as well?

Speaker 2:

The testing market is often overlooked by a lot of the players, but it is just as important as the developer market, and we will be releasing more. We currently support web application testing. We'll have support for mobile application testing by the end of this financial year, by December '25. And then we'll have full API testing support at the beginning of next year, alongside performance and load testing.

Speaker 1:

Wow, amazing. Well, congratulations on all the success. Incredible work. Onwards and upwards.

Speaker 2:

Thank you.

Speaker 1:

And thanks everyone for listening and watching, and reach out with questions. Thanks for sharing, and be sure to check out our new TV show, Tech Impact TV, now on Fox Business and Bloomberg. Thanks so much.