Camunda Community Podcast

Testing BPMN Processes - Two Approaches

The Camunda Community Podcast, hosted by Josh Wulf. Season 4 Episode 2

In this episode, Simon Zambrovski and I discuss two different "flavors" or approaches to testing BPMN processes - a unit-test approach and a behaviour-driven approach.

We debated making this two episodes, but eventually landed on keeping the two approaches together, so you can consider them at once. If you are familiar with the unit-testing one that we describe first, skip ahead to learn about behaviour-driven testing.

---

Visit our website.
Connect with us on LinkedIn, Facebook, Mastodon, Threads, and Bluesky.
Check out our videos on YouTube.
Tweet with us.

---

Camunda enables organizations to orchestrate processes across people, systems, and devices to continuously overcome complexity and increase efficiency. A common visual language enables seamless collaboration between business and IT teams to design, automate, and improve end-to-end processes with the required speed, scale, and resilience to remain competitive. Hundreds of enterprises such as Atlassian, ING, and Vodafone orchestrate business-critical processes with Camunda to accelerate digital transformation.

---

Camunda presents this podcast for informational and entertainment purposes only and does not wish or intend to provide any legal, technical, or any other advice or services to the listeners of this podcast. Please see here for the full disclaimer.

Simon Zambrovski:

People out there, please, if you create models, do testing.

Josh Wulf:

Welcome to this episode of the Camunda Nation podcast. I'm your host. My name is Josh Wulf. I'm a developer advocate at Camunda, and I'm once again joined by Camunda champion Simon Zambrovski. This time we're talking about testing business processes. This is a long conversation and Simon and I debated making it two episodes or keeping it as one. In the end, we decided to keep it as one episode, because Simon talks about two distinct approaches to testing and we didn't want to break them up.

Now these are two different flavors, two different philosophies. The first we talk about in the first half of our conversation is the unit test approach, which is the oldest approach. And the second, which we talk about in the second half of our conversation is the behavior driven approach, which is a further evolutionary development. They each have their own pluses and minuses and Simon gives a fair, if not balanced, presentation of the two. He definitely has his own favorite horse in the race. So we kept it as one episode so that you can consider both of them together. So without further ado, let's take it away. Simon, welcome back to the Camunda Nation podcast.

Simon Zambrovski:

Hi Josh. Thank you for having me back. People start with a process application and are amazed by this visual thing, right? That you can actually draw diagrams and you can discuss diagrams with the business department. And sometimes even the business department can draw the diagram, though usually you discuss it with the business department and you draw it. And then they say, "Hey, wonderful. Look, we've drawn a diagram and now we can execute this." So this is the power of BPMN, right? We can use this as a visual model and as a technical model. And we need to put some small technical details into this diagram to explain to the process engine what to do at which step, and then we're done, right?

But yeah, so in the end, professional software development looks slightly different, right? Because you are actually not done, but you have just started at that point, because you have created the first piece of the software that runs. And if you're good, it runs for years, right? So the thing is that people will use it, right? And your organization will use it. Yeah. Since these process models are executed, it's the same as with a code base, right? So you want to make sure that it does what you intended this software to do. And therefore it's a good idea to test that, right? So that's actually it.

So in that moment, when you say, "Okay, this process model stops being just a nice picture to explain what's going on, but becomes an executable model," you should apply the same principles as you would apply to a code base that is executed, actually, right? So test it. What can actually go wrong with that model, right? If I draw in the diagram what I mean, can anything go wrong there? And there are several answers to this. So one of the answers is, first of all, I mean, if you want to change this, then you are kind of changing what was there towards a new version of it. So you should make sure that you are not changing too much, and that you are changing it in an expected way.

But also if you're just creating it for the first time, there are things that might happen. So first of all, you might make mistakes in using BPMN, right? So you thought it would be like this, because you thought that this is the meaning of that symbol. And actually, it's not, right?

Josh Wulf:

Mm-hmm (affirmative).

Simon Zambrovski:

Something like the syntax of the model is usually checked by the tool itself, so it won't execute anything that is erroneous, because we are using the Camunda Modeler, which does not save bad models, right? And we are also using the engine, which does not execute bad models. But there might be mistakes in the meaning, right? So I thought it would mean that, but actually it doesn't, right? So this is one thing. The second thing is you might just get tedious errors, like off-by-one errors and so on. Like inverting a check, so true and false are inverted, right? So this is silly, but yeah, sometimes-

Josh Wulf:

Happens.

Simon Zambrovski:

... we make mistakes like that, right? And I think the third one, and this is the most important one, is that, I mean, the model itself is not executing any code, right? The model itself is just the recipe for what to execute or what to call. And on the other end, you have some software components. And you need to pass correct values to those software components and take the resulting values and write them back to the executed processes, right? So this binding between the process model and the software should be valid, right? Yeah. So these are the elements that might go wrong.

Actually, I have seen all of them going wrong, sometimes at the same time. And the second question is the same as for any piece of software, right? If stuff becomes large and complex, it's easier to make mistakes there. So it's a good idea to test it. Of course, if your process is a three-step process, then probably it's more difficult to make a mistake. If your process is a large, enterprise-wide, end-to-end process, then you probably made one. People out there, please, if you create models, do testing.

Josh Wulf:

As a software developer, I'm used to test-driven development, writing unit tests, and you write these small isolated functions, input-output kind of things. They're a black box and you can test them. And there's a separation of concerns in your code; you create it so it's testable. But with a process, it's inherently tightly coupled. And it's the coupling between the units in the process that actually is the process. So I'm thinking to myself, "How do you test a process?" Because normally I would break it into units and test each piece by itself. So I'm very interested to see how this is going to go. How do I test this inherently tightly coupled process? It doesn't really exist as a thing with the pieces in isolation. It's all the things together that make it a process.

Simon Zambrovski:

You are asking the correct question, and you're very close to the answer actually, right? I mean, even if we draw it like a continuous process from start to end, the engine is not executing it like that, right? There are these wait states, and the process is not really running continuously, right? The engine is more jumping between wait states. So if you start the process and at some point you have a user task, that's a place for the engine to stop, right? Because it knows, okay, the next task will be executed by a human. Okay? So the engine will execute all the activities you modeled from the start to this user task. Then it will persist all the results in the database. And it will wait until a human says, "Okay, I'm done with this user task," right? By actually calling the API of the engine, right?

So there are parts of this individual solid process, and the engine executes these parts one after another. And this is one of the key points of how you can test this. So it's not completely continuous, right? That's one thing. The second thing, of course, is what to test: we are not speaking only about the process model, right? The process application is the model, plus there is this glue code between the model and the actual services. And then there might be services that are just code invoking the process engine, right? For example, if we speak about the user task, usually you have some, I don't know, some front end to offer it to the user. The user will use this front end, and this front end is usually tied to some backend component, like, I don't know, REST controllers or whatever.

And these REST controllers need to communicate with the process engine to say, "Okay. Hey, the user is ready. This is the input. Please complete this task." So in that moment you're speaking with the process engine, and the process engine is not driving. It lets the human in and says, "Okay, there's a task for you," right? So there are, of course, units, as you say. For example, this functionality that the user task is completed when you click in the UI is a logical piece. If you cut it into pieces, you can even say, "Okay, there are multiple units there, like a REST controller and some services, and for the front end you might create different tests and so on."

But in general, the point is you're completely right. So the business process is a sequence of states, right? A sequence of states is a behavior; it's actually not a unit in itself, right? And therefore, you probably know there are approaches, especially around TDD, like the very popular behavior-driven development, right? Where you actually describe your test as a behavior and test that this behavior is fulfilled by the software, right? So this is one of the major... I won't say it's either unit or behavior. That's not quite true. You can combine these two. I mean, there are two views on how to test stuff, right?

Josh Wulf:

Mm-hmm (affirmative).

Simon Zambrovski:

Yeah. Maybe we should elaborate this a little bit more, because it also applies to the process model tests. So there are two schools there. Let us put it like this.

Josh Wulf:

Okay.

Simon Zambrovski:

It's one science by two schools.

Josh Wulf:

Yes, tabs and spaces.

Simon Zambrovski:

No. This is not religion. This is -

Josh Wulf:

Okay. This is science. We're doing science here.

Simon Zambrovski:

Yes. Okay. So let's maybe start with what is independent of the school. So the general approach: what does it mean to test the process model? The core idea is that Camunda provides the ability to run the engine in a test mode. So this is the basic requirement, and this is one thing that I think is now being forgotten, I would say. Back in the day, like 10 years ago, that was the major feature, a unique selling point of this engine in contrast to almost everything that was on the market: that you were able to start the engine in a test mode.

And this test mode was not just a switch for a large application running on a mainframe server. No, no, you could run it from a unit test. So it was like, "Wow, we can start it from a unit test." Especially at that time, it didn't matter if it was strictly a unit test or not, but it was the JUnit framework running the test. And in the JUnit framework, you had an integration, the so-called process engine rule. This process engine rule was able to start a process engine and deploy a process into it during the test run. So this is the amazing feature that opened up this world of how you should proceed, right?

So the general approach is very easy. You start up, let's say, the JUnit test. It starts the process engine. You deploy the process model into it. So you say which process model you want to test. Of course, you need to set up the delegates somehow, because the process model references pieces of the software, usually delegates or listeners. So these need to be set up properly. And then what you do is you just say, "Okay, now I'm starting the process," right? And it just executes in your unit test. And then, as we said, it's actually not running continuously, but jumping to some wait state, and at these wait states you assert that certain conditions are met, and then you push it further and it goes further.
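A minimal sketch of what Simon describes here, for Camunda Platform 7 with JUnit 4. It assumes a test camunda.cfg.xml with an in-memory database on the classpath; the process key, file name, and element IDs are illustrative, not taken from the episode.

```java
import org.camunda.bpm.engine.runtime.ProcessInstance;
import org.camunda.bpm.engine.task.Task;
import org.camunda.bpm.engine.test.Deployment;
import org.camunda.bpm.engine.test.ProcessEngineRule;
import org.junit.Rule;
import org.junit.Test;

import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;

public class OrderProcessTest {

  // Starts an embedded process engine for the duration of the test.
  @Rule
  public ProcessEngineRule engine = new ProcessEngineRule();

  @Test
  @Deployment(resources = "order-process.bpmn") // deploys the model into the test engine
  public void runsToTheUserTaskAndThenToTheEnd() {
    // The engine executes everything from the start event up to the first wait state.
    ProcessInstance instance = engine.getRuntimeService()
        .startProcessInstanceByKey("order-process");

    // The token is now parked on the user task.
    Task task = engine.getTaskService().createTaskQuery()
        .processInstanceId(instance.getId())
        .singleResult();
    assertNotNull(task);

    // The "human" completes the task via the API; the engine runs on to the next
    // wait state, which in this simple model is the end event.
    engine.getTaskService().complete(task.getId());

    assertNull(engine.getRuntimeService().createProcessInstanceQuery()
        .processInstanceId(instance.getId())
        .singleResult());
  }
}
```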

And yeah, this is the way you can do it, executing one step at a time to the end, okay? So this is the general idea of how this actually works. And of course, one of the things is that there is the ability to start the process engine in test mode and run the process. This is provided by the Camunda engine itself, right? So this is one key feature of the product, but also a very old one. On top of that there is a framework which is now, again, part of the software provided by Camunda itself. It was initially created in the community; Martin Schimak was one of the driving forces in creating this framework.

The framework is called Camunda BPM Assert, which is an assert library. So primarily, it's a library to assert conditions about process instances and what is related to them, because you don't actually want to call the process engine API and say, "Process engine API, tell me... Give me that process and give me that position." You want to express the asserts in the way you think about the processes, right? So you want to assert something like: the process is waiting at task A, or I want to complete task B, or I want these tasks to have been executed by that moment, or whatever, right? So this is the way you think, right?
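In code, the earlier sketch reads the way Simon phrases it when written with the camunda-bpm-assert fluent API (again a sketch; the process key and activity IDs are assumptions):

```java
import static org.camunda.bpm.engine.test.assertions.bpmn.BpmnAwareTests.*;

import org.camunda.bpm.engine.runtime.ProcessInstance;
import org.camunda.bpm.engine.test.Deployment;
import org.camunda.bpm.engine.test.ProcessEngineRule;
import org.junit.Rule;
import org.junit.Test;

public class OrderProcessAssertTest {

  @Rule
  public ProcessEngineRule engine = new ProcessEngineRule();

  @Test
  @Deployment(resources = "order-process.bpmn")
  public void waitsAtApprovalThenFinishes() {
    ProcessInstance instance = runtimeService().startProcessInstanceByKey("order-process");

    // Assertions phrased the way you think about the process.
    assertThat(instance).isStarted().isWaitingAt("approveOrder");

    complete(task(instance)); // the "human" finishes the user task

    assertThat(instance).hasPassed("approveOrder").isEnded();
  }
}
```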

Josh Wulf:

Mm-hmm (affirmative). So are you describing like orchestrating the engine to execute the process to a certain point or?

Simon Zambrovski:

Yeah. This is actually the place where you have these two schools, right?

Josh Wulf:

Okay.

Simon Zambrovski:

So there is one flavor, the unit test, okay? And it should behave like a unit. So Martin says a process is a unit, and I can express to you what it means. And I fully understand why he said that. He says, "Okay, even if a complex scenario is a unit, how would you test it? You would say, okay, I want to make sure that everything is preloaded, and then I let it run," right? So there are frameworks like Mockito, for example, where you say, "Okay, I preload behavior at certain points. It doesn't matter when they occur, but I have to preload all the places where some decision is made. And then I do the action, and in one action, I run to the end." Okay? And at the end, I can assert, okay? So there are no steps in between.

And of course, as we said, since the process is not really running, but what you can easily do is to say, "Okay, I'm now in this wait point." Maybe on start, or maybe on the user task. Now, I preload until the next wait point or wait state of the engine, and then say, "Okay, now go." Okay? And then I want to make sure that it actually runs or jumps to the next wait point in the correct way, and then I assert some stuff about this.

Josh Wulf:

I see.

Simon Zambrovski:

By the way, what can I assert in the engine? So from the point of view of the assertion, there are not many things you can do. So one of the things is, so the state of the process instance is actually where it is located, right? So if there is this execution token analogy, right? Like having a marker. And you can ask the question where it is. So where I am in the process instance, right? So in the process model, point me the activity that is now there. That's one thing you assert. And of course, if there are multiple ways towards this position-

Josh Wulf:

Wait state.

Simon Zambrovski:

Yeah, yeah. Wait state. You can, of course, ask about the path. Have I passed this way, or did I come from there? Right? So this is clear. Of course, you can ask if the process is running or not. So if you run to the end, the natural question would be, "Okay, is it finished?" That's also a very easy assertion. And everything else that you are actually interested in is: what is the state of the process variables? So we spoke about this last time, right? About data. Your processes produce or process data. So you want to check this data state, right? So you can actually query for the variables and say, "Okay, if I'm here, I would expect this variable to be set, or set to a particular value, or whatever." So this is the natural thing.

In addition, on top of that, you can say, "Okay," and then make sure that this delegate has been called, and that delegate has been called. Yeah. Because while passing along, you're calling delegates, calling services, so you can assert this too, right? So verify the action.
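Inside a test like the ones above, with a started ProcessInstance named instance and a mocked JavaDelegate named riskCheckDelegate registered for the corresponding service task, the assertions Simon lists look roughly like this (element and variable names are illustrative; verify and any come from Mockito):

```java
// 1. Where is the token right now?
assertThat(instance).isWaitingAt("checkRisk");

// 2. Which way did it come?
assertThat(instance).hasPassed("enterOrder");

// 3. Is it still running, or finished?
assertThat(instance).isNotEnded();

// 4. What is the state of the process variables?
assertThat(instance).hasVariables("riskLevel");
assertThat(instance).variables().containsEntry("riskLevel", "high");

// Plus: verify that the (mocked) delegate was actually called.
verify(riskCheckDelegate).execute(any());
```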

Josh Wulf:

Okay.

Simon Zambrovski:

So the unit approach is more like a marble run, right? You set up the scene, and then you take the ball and you let it go. And then you say, "Okay, look, it came out where I expected." Or not, right?

Josh Wulf:

And you do that piece by piece, through the process.

Simon Zambrovski:

Yeah. And you cannot stop the ball while it runs, but it will end up in the next wait state, right?

Josh Wulf:

Which is a human user task.

Simon Zambrovski:

Yeah. There are several wait states which are natural: user tasks, which is what you mean by a human task, right?

Josh Wulf:

Yeah.

Simon Zambrovski:

And there is, of course, any kind of waiting events. So if you say, "I want to receive message here."

Josh Wulf:

Yeah. Okay.

Simon Zambrovski:

Or, "I want a timer here." Or, "I want get a signal or that's kind of nature." Then you can also insert these wait states on your own, right? By saying that this activity is asynchronous before or after. So the interesting stuff about these test process engine is that by default, the job executor is deactivated. So usually, all these kind of synchronously executed stuff is performed by a job executor. So for example... So what it does, it goes to this place, it commits all the data, creating this essence point, and then the job executor picks up, and moves the marble further. So there is this hidden force there. And test engine, it's just deactivated, which is wonderful, because then it just stops there. Then you can assert everything you want, and then you can manually, say using the Camunda API, and now, again, move the ball further, right?

And then it again runs through the marble run, and then it stops again. And then you can assert, okay? So this is why it's a unit. And if you have ever used tools like Mockito or something like this, where you have to stub your services or stub behavior, then you probably know what the problem with that is, right? First of all, there might be multiple of them, so it's maybe not only one service, but maybe like five or 10 or whatever. It depends on how long your -

Josh Wulf:

Exactly that. What about side effects? What about side inputs? Yeah.

Simon Zambrovski:

Yeah. You have to think about this. And the problem is that you cannot... I mean, you have your process model here, but for every scenario, if you say, "Okay, it should do this and this and this and this," you have to keep it in your mind, right? While you are doing this setup of the stubbed services, you have to. There is no other model than your mind. Okay? I mean, writing the test is not helping you in doing this. You can name it, like, "Okay, in this case, I would like to do this and this and this and this," and put it into the documentation of your test method, or maybe name the method accordingly. But there must be some translation of what you want to do into setting up the services exactly in that way, right?

So it doesn't help you. And it becomes longer if these test pieces are long, because, for example, you don't have a lot of human interaction in the process, right? So it's what we call dark processing, running in the dark at night. You're booking something, or some machine is doing some calculations and runs like 200 steps in a row-

Josh Wulf:

Machine to machine, that kind of services.

Simon Zambrovski:

... maybe only have... Yeah, machine to machine, maybe having only four or five wait states in it. So you have really large parts that run one after another, and you have to deal with that, right? So this is one thing: I would say it scales badly with large process models, because then you have to keep track of all that. And the second one, which to me seems to be a limitation, is... So simplicity is key, right? For unit tests. You have a small unit, so test it like a small unit. If the unit becomes too large, it becomes unhandy. And so there is no separation between the test code...

So, the specification of what should be tested, and all the boilerplate code that you need to drive the application, which is usually called the application driver. Sometimes you separate this and you say, "Okay, there is an application driver." So I create an abstraction for how I control the application to execute specific stuff, like creating an API on top of it, right? And then your test should only call this API. And then your test specification becomes clear and easy. But then there is a need for this application driver. And this is exactly the opposite approach, right? This application driver thing, and separating the test from the application driver code, is something that is used a lot in the behavioral approach.

So probably, I mean, you are the guy who is a pro in front end, right? Behavior-driven, scenario-based testing is something that is vastly used in front end, right?

Josh Wulf:

Yes. It is. But I'm more a Node.js backend developer, not front end. Yeah.

Simon Zambrovski:

Okay. I thought you were Node.js everything, backend.

Josh Wulf:

All right. Backend Node.js.

Simon Zambrovski:

So, yeah. Yeah. So in the end, in front end you have these frameworks, the prominent one being Selenium, right? Where you say, "Okay, I want to test a series of pages coming up." Because a page flow is of course also a scenario-

Josh Wulf:

Process.

Simon Zambrovski:

... and a behavior, right? That you want to test. So a Selenium test usually works like this. You have this application driver, which is these page objects, right? And your test is only referencing them, calling actions on them. And the idea there is that if you do that, you can express it in a very clear and business-like way, okay? So your page object shouldn't offer you technical methods that say, "Yeah, move the mouse 25 pixels to the right," which would be a very technical way of controlling something. It should say what the application does, actually. It should say, "Submit a customer report," or "Put this in the trash bin," or "Delete that," whatever, right? Okay?

So this should really be the language that the business is speaking. And by doing this, your test specification looks very business-readable, right? And the advantage of this, and this is the main force in this behavior-driven or BDD stuff, is that you should write the specification in a language that business people understand, to enforce the business-IT alignment. And to me, this is very close to the business process models, which should also be expressed in a language that the business understands.

For the business-IT alignment. So it's the same question: why should I write tests in a different way than I create the model, right? Yeah. Again, there is this dream of software developers that business people will write the test specification for you, and that you can train business people to do so. It works better with business analysts. Actually, we have good experience with that. But in the end, I think the important thing is that business people should be able to read these tests, right?

Josh Wulf:

Yes. Anyone who gets too good at writing it will become a developer.

Simon Zambrovski:

Yes, exactly. But the point is that you want to create a test suite that produces a readable report, so that business people can read through this report and verify that it does what it is intended to do. So it should be human readable. And this is where this BDD style is very strong. You can actually create an application driver, and this application driver hides the entire technical stuff: how to control the engine, how to set variables.

This doesn't matter, actually. On the business level, it's something like: the order is rejected, or the order is accepted. And this is then translated into the correct variables being set, right? And in this scenario you say, "Okay, if the order is rejected, then the manager has to approve it," right? This is what you want to express, right?

Josh Wulf:

Yes.

Simon Zambrovski:

Yeah. And this is what you're writing in the test specification. So the approach-

Josh Wulf:

This is the second approach, right? The first one, that's unit-driven, one is the-

Simon Zambrovski:

Yeah, yeah. That's [crosstalk 00:28:56] approach.

Josh Wulf:

Okay, Martin Schimak is the unit test. This is the behavior-driven one.

Simon Zambrovski:

Yeah. And the second one is the behavior-driven one. And the approach there is actually: you start the process and then execute it step by step. And every step is basically a call to the application driver, which does the communication with the process engine and says, "Okay, do this and do this and do that." And yeah, what do you get from this? Of course, you are not only writing the specification, you are also writing this application driver. You need to do that, but for any individual process or application you get very high reuse of this application driver, because you express business meanings in words, you implement them once, and then that's it.

And of course, since you've now created an abstraction in your test, you are able to generate very nice human-readable reports out of it. It could be in Gherkin, it could be in something else. So, what we started with: the first thing we did, we tried to do it with Selenium, but Selenium had one problem, so we decided not to do it with Selenium. The first one we built, almost 10 years ago, was Camunda-BPM-JBehave. JBehave is one of the Java-based behavior-driven development frameworks. And there, you can really write Gherkin. You supply a text file with a scenario. There is no code. And this is this dream of, yeah, being [crosstalk 00:30:51] by humans. Completely low code. Exactly.

And there is a syntax, and you usually have this given-when-then stuff and so on. And every step is a call to the application driver. And the application driver does exactly this, step by step, and moves you forward, okay? So the main advantage in contrast to the unit stuff: in the unit approach, one test covers the way between one wait state and another wait state. Now, you have a scenario test that can say, "Okay, I can really test an end-to-end run from the very beginning to the very end, by moving between these states." So it's kind of different: you follow the process, okay? And your steps are following the path.

So that's the idea, right? Okay. But then JBehave is a difficult framework. It's very, very flexible, you can do a lot with it, but it's not really handy. So I would say they created too-complicated abstractions. For developers, it was not easy to use. And some years ago there was a company called TNG. They created several wonderful test frameworks. One of them is ArchUnit, which is probably very well known, where you can unit test your architecture. And the second one is JGiven. So they said, "The problem with behavior-driven testing in Java is that in the end your developers write the code anyway. So there is no reason for them to write silly text in text files and then generate code out of it."

You can let them write Java code, because, I mean, there are pros to that. You don't need another abstraction to do so. And it was based on the observation that in most projects, it is still Java developers writing this code, right? Even if business people can understand it, it will be written by Java developers. So their decision was, "Okay, then let them write this in Java, but let's create human-readable, business-readable reports."

So they said, "Okay, what we do is, we give you a framework that is perfectly used from Java ecosystem, with a nice fluent API, with the keywords in Java, like given when then and so on." So it really looks nice for you as a developer, but it's native Java. It compiles it. You don't make mistakes because you put a space on the wrong place, in a text file, right? So this is Java code, but we build a bridge. So we support this. We build a framework that can execute this, and during this execution we can deduce. So we have a very, very good default actually. How to deduce a human readable report based on that, what you have written in your test. So they're doing silly stuff there.

If you write camel case, they will say, "Okay, camel case is obviously there because you actually want to merge multiple words together. So what we will do is lowercase with spaces in between." And in many situations, you see, "Hey, this is exactly what I wanted to say, actually." Right? So you can name the methods of your application driver that way, or you can use underscores or whatever. They split the words apart, and they introduce very funny annotations where you can relabel application driver methods, and you can pass parameters in, which can be formatted and rendered as tables and so on. So from this code, yeah, there is a small flavor of how you should write this. And without additional cost, you get wonderful HTML and PDF reports out of it all.

I mean, it looks beautiful and it's completely human readable. And you just include it in your CI/CD pipeline, and on each run, there is the report for the process testing. So this is what we did with them. We created Camunda BPM JGiven, which is an official extension, and it supports exactly this style of testing. And sorry, to tell the truth, I mean, I'm doing a lot of process application development. So currently I'm again in a project where we have like 10 process engines, and we have like 100% coverage of every piece of model based on this. So it scales a lot, and it's not only me writing this. So don't misunderstand me. Yeah, you have to understand how it works, and then you can use it.

So the reason for this Camunda BPM JGiven is that it provides you a small wrapper around the JGiven framework, which is not specific to processes; you can use it for anything, right, if you want to write BDD tests. What we did is we created a default stage for process testing, which gives you the usual steps, which are non-business, but something like: the process is waiting at that step or that activity, or the process is finished. So you have a kind of translation of some commands coming from BPM Assert that are adapted in a way that you can use them easily, in a fluent way and so on. So this is the library, plus you get all the report generation and so on. Yeah. So that's the second approach, right?
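A sketch of this style in plain JGiven, with camunda-bpm-assert doing the engine work inside the stage. The camunda-bpm-jgiven extension ships a ready-made process stage with steps along these lines; the stage, step names, process key and element IDs below are illustrative, not taken from the episode.

```java
import static org.camunda.bpm.engine.test.assertions.bpmn.BpmnAwareTests.*;

import com.tngtech.jgiven.Stage;
import com.tngtech.jgiven.junit.SimpleScenarioTest;
import org.camunda.bpm.engine.runtime.ProcessInstance;
import org.camunda.bpm.engine.test.Deployment;
import org.camunda.bpm.engine.test.ProcessEngineRule;
import org.junit.Rule;
import org.junit.Test;

public class OrderProcessScenarioTest
    extends SimpleScenarioTest<OrderProcessScenarioTest.OrderProcessStage> {

  @Rule
  public ProcessEngineRule engine = new ProcessEngineRule();

  @Test
  @Deployment(resources = "order-process.bpmn")
  public void a_rejected_order_needs_manager_approval() {
    // Reads like the business scenario; the JGiven report renders it in plain words.
    given().an_order_is_submitted()
       .and().the_order_is_rejected();
    when().the_clerk_completes_the_review();
    then().the_process_waits_at("managerApproval");
  }

  // The application driver: business-language steps that hide the engine API.
  // One stage is reused for given, when and then.
  public static class OrderProcessStage extends Stage<OrderProcessStage> {

    private ProcessInstance instance;

    public OrderProcessStage an_order_is_submitted() {
      instance = runtimeService().startProcessInstanceByKey("order-process");
      return self();
    }

    public OrderProcessStage the_order_is_rejected() {
      runtimeService().setVariable(instance.getId(), "rejected", true);
      return self();
    }

    public OrderProcessStage the_clerk_completes_the_review() {
      complete(task("reviewOrder", instance));
      return self();
    }

    public OrderProcessStage the_process_waits_at(String activityId) {
      assertThat(instance).isWaitingAt(activityId);
      return self();
    }
  }
}
```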

Josh Wulf:

That sounds like the one that you are personally -

Simon Zambrovski:

Yeah. This is my favorite. Yeah. I mean, I'm a fanboy of what I created myself. Yes. But again, to tell the truth, it was the second shot, right? The first one was JBehave, and I believed in the approach, but it failed on the application of the framework. It was too complex to set up, actually. And so we did a second attempt and found the right framework, which is basically JGiven from TNG, from Munich, and adapted that into Camunda BPM JGiven, which is now the testing framework of my choice, okay? In the end you can do both. And in Camunda BPM JGiven, of course, we're using, again, the Camunda BPM assertion library, which is out there.

But I think all of the other test frameworks are doing this too. In the unit approach, you can use the assert library on its own, because you not only have assertions there, you also have the ability to control the engine. So there are pieces of an application driver in the framework itself. There are some limitations there, but it's usable. No, that sounds too negative, that's not right: in most cases you can use it, okay? There are some small limitations if you have parallel runs in a multi-threading environment, and loops are difficult and so on. But you can use it for most processes. It is easy to use.

And then there is some news: the Camunda platform scenarios, the newer one from Martin Schimak, actually, which uses this marble-run idea of preloading the behavior and then letting it go. And then there is a very new one, which is called BPMN driven testing. I haven't spoken with the guy yet, but I will, because I found out that he's actually living in Hamburg, which is probably 20 kilometers at most from my location right now. So he's a local guy actually, and we are planning to run the next round of the Camunda meetups in spring. Maybe it will be a remote one, but then I would definitely invite him and speak with him and so on. So his idea is to say, "Let's do it even more low code." So go back: we have the process model, and what he did is he created a modeler extension, actually using the bpmn.io modeler. And you can take the process model and mark a path in it, and say, "I would like to run this way."

So what is he doing? He's creating a path, and this path gives you a way through the model. This is your test specification, actually. So that is what you run. Yeah. Currently, he's more in this unit approach, because the path is selected and then you have to preload the services, but I'd like to speak with him and maybe do it in a way where you say, "Okay, the path creation is actually his USP." So this is what he's doing: creating the visual representation of the path and drawing it. And then be flexible in what framework you use in the end, because having the path, you can of course say, "Okay, produce something like a behavior-driven-approach test out of it, or produce a unit-approach test out of it." So it doesn't matter actually, right? -

Josh Wulf:

Yeah. Okay. It's interesting. Yeah, we did an interview with Gabor in September, last year, in the podcast. Yeah. So it'll be interesting to see, because he's taken like a test-driven approach to this, okay? And you want to turn him to the dark side. No, to the behavior-driven side. Both, make it. Why don't we have both?

Simon Zambrovski:

Yeah. So I think it's just a flavor in the end. The important thing is that you do your testing, in the end. But for me, it looks more natural: if my process is a series of steps, then my test should be a series of steps. So I don't need another mental model. That seems natural to me. And the problem with not separating the code is, and this is what I observed: you don't have these problems if you initially created tests for the process model, but you have them if you change things. Because if you change something in the beginning... The BDD approach forces you to decouple tests from the driver, and the unit approach does not. So sometimes you change a little thing, and you break like 15 tests, and you say, "Oh, wow, that's too much actually, because I should have broken one." Right?

Josh Wulf:

Unit? Yep.

Simon Zambrovski:

Testing exactly that place, right? So it's very difficult to hit the correct abstraction. This is one disadvantage that I see. And the second thing that I see: I like this test-first approach a lot. My idea is to say, if you have in mind what the process should do, you can actually start with writing the specification.

Josh Wulf:

Right. If we get an order and the customer is under 18 and they didn't supply any identification with their application, the order is rejected.

Simon Zambrovski:

Exactly. So you can write this first as a test spec.

Josh Wulf:

Yes.

Simon Zambrovski:

And it's very easy to do so using this Camunda BPM JGiven. I mean, you set up the initial test. You say, "Okay, I have these action stages." Usually you separate: you have the three stages, like given, when, then, and for the process test, the given and when stages are the same one, because you're doing actions, right? And it doesn't matter if you do the actions in the given stage, to drive the process to a certain place, or in the when stage, to make the next step. If you create tests in an iterative way, saying, "The first one is testing from A to B, the second from B to C, the next one from C to D," right? Then in the last one, you will say, "Okay, the action from A to B to C is actually in given, because I have to be in C to test the next step," right?

Josh Wulf:

Yes.

Simon Zambrovski:

And so you're just moving it up from one stage to another. Usually, you are reusing the application driver stage. So given and when are the same, and then is pure assertions, right? Because you don't need to drive anything in the then stage; there you just want to check something. And so if you have the stages, you can just write this down as it's told, right? "If the order is submitted and the customer is under 18" and so on, so you just write this. And then the IDE says, "Okay, I don't know what this is." And you say, "Yeah, just create these method stubs." And then you have the specification, and then you say, "Okay, now I have to fill it in. What does it mean, 'order is submitted'?" Then you say, "Okay, man, I have to do this and this in the process."

So you connect the driver to the process. And if the process is not there yet, then you just model it and connect it at the same time. So you can really do this kind of test-driven development in a pure form, which is much more difficult if you say, "Okay, I have to preload services that don't exist," because then you have to create the services first. Indeed, I mean, what do you stub? You have to preload something, but you don't have the services yet, okay? And this is why I'm really a very big fan of this behavior-driven approach, because I think it follows the same pattern.

Josh Wulf:

So I'm not quite clear on this one part here. You may have covered it. I may just have missed it. But it's like, the process has steps in it, service tasks, it calls out to services, databases. It does things, it gets things back. Now, is it the case that in both of these approaches, the unit-test one and the behavior-driven one, you have to write mocks for those services?

Simon Zambrovski:

Oh yeah. I probably forgot to tell you about this. Yeah. So of course, you have to. The question is, look, we've spoken about what it means for the process to change its state, right? Which is changing the position in the process model, so the current activity, and changing the variables, right? There are, of course, side effects, because you are usually not running the process just to change the position; you invoke services, they might persist data, read data, modify data, whatever, but this is not part of the process test, right? That is the part of what happens when the service is invoked. So there is this clear separation: from the process point of view, it's just position and variables. Let's speak about how this binding is executed.

So there are several possibilities to bind code to the process model. One of them, for example for service tasks, and some others which are kind of derived from that, is actually that you provide a so-called delegate, right? And there are different ways in which you provide a delegate. At least three are defined in the Camunda engine: you can have a delegate expression, you can have an expression, or you can have a fully qualified class. So there is this smart sentence saying, "Testability is a feature." Okay? So you might create code which is very complicated to test. And one of the things is, if you provide a Java class, you have to specify the fully qualified class name of the delegate inside your process model, okay? And you can imagine, if you do this, the engine will go along and say, "Okay, I'm looking at the delegate, I'm just creating a new instance of that by class name."

So it's a Class.forName call. If you want to test this, it's probably not the best idea. And there are ways you can avoid this, but, I mean, you just don't need it, because the most-used environment currently, I believe, is Camunda with Spring Boot. So mostly you can use this expression or delegate expression inside of it. Which means that there is an expression, expressed in an expression language supported by Java, and there are different ones, like JUEL, or in Spring it's [inaudible 00:48:51], right? So, expression language. You have this dollar-curly-brace, and then the name of the bean, for example, or you can say nameOfTheBean.nameOfTheMethod, specifying this expression. And to test it most easily, the easiest thing you can do is to actually use a delegate expression.

So the delegate expression means that what you put in there is the name of the Spring bean, and the Spring bean needs to implement an interface provided by the engine, which is JavaDelegate. By the way, the same applies to listeners; there is this concept of listeners that you can put on any kind of activity. So execution listeners, and for tasks you have additional task listeners. And it's the same: there is a TaskListener interface, and you can say, "Okay, I'm using a delegate expression for this." And the advantage of this is that you have an interface, right? To mock that, nothing is easier than that, right? I mean, you have exactly one method. And there is a library, by the way, also created by a friend of mine, and I also contributed to it. It's called Camunda-BPM-mockito, and it provides you standard ways of mocking delegates and listeners inside of your process models.

So what you do is... I mean, it has a full feature set for any kind of use you want, but the easiest way to do it is just to say, "Okay, I set up Mockito." And say, "Okay, Mockito, please mock me a JavaDelegate." And then in the engine, for this expression, register this JavaDelegate. And then you're actually done, because in this situation this mock or this stub will never do anything. It'll just execute, because it's empty. But in your test or in your application driver code, you can say, "If this is executed, please do this and this." So set some variables or throw a BPMN error or whatever, right?
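A sketch of that registration using the engine's own Mocks registry together with plain Mockito (camunda-bpm-mockito wraps this kind of setup into convenience one-liners). It assumes the service task in the model uses the delegate expression ${orderConfirmation} and that the test engine is configured with the MockExpressionManager; all names are illustrative. Inside a test method declared with throws Exception:

```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import org.camunda.bpm.engine.test.mock.Mocks;

// ...

// A mock delegate: by default a no-op, so the token just passes through the service task.
JavaDelegate orderConfirmation = mock(JavaDelegate.class);

// Give it the behavior this scenario needs: write a process variable when called.
doAnswer(invocation -> {
  DelegateExecution execution = invocation.getArgument(0);
  execution.setVariable("rejected", true);
  return null;
}).when(orderConfirmation).execute(any());

// Bind the mock to the name used in the model's ${orderConfirmation} delegate expression.
Mocks.register("orderConfirmation", orderConfirmation);

// ... start and drive the process; afterwards you can verify(orderConfirmation).execute(any());
```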

Josh Wulf:

Interesting. So it's a mock service that does nothing. And then [crosstalk 00:51:04]-

Simon Zambrovski:

Yeah. There is no service actually, right? These services don't have to exist at the point where you test this, because, I mean, you have given it a symbolic name in your process model. You say, "This is the order confirmation service delegate." And then in the test, you say, "Okay, for the order confirmation service delegate, I register a mock that at first does nothing." And then for this particular case, you say, "Given, I don't know, the order confirmation is automatically rejected." And what that will do is set up this mock, right? It will say, "Okay, if I'm called, then set rejected to true as a process variable."

Josh Wulf:

So you're basically writing the mock behavior in the test specification.

Simon Zambrovski:

Yeah. And this is, again: if we're in the unit approach, you will write it in your test specification. If you are in the scenario world, you will put it into the application driver. So you will create an abstraction saying, "Okay, rejecting the customer order means: on that mock, set the variable." Right? And in the end it's exactly that.

Josh Wulf:

One of them sounds imperative, and the other one sounds functional.

Simon Zambrovski:

Yes. Kind of.

Josh Wulf:

Yeah. It's imperative versus functional programming again.

Simon Zambrovski:

Maybe.

Josh Wulf:

Yeah.

Simon Zambrovski:

Yeah. So this is important to mention. I think we should cover a little bit more in the future in depth, because this is an important building block for this. So not to, because yeah-

Josh Wulf:

We can draw-

Simon Zambrovski:

... The point is-

Josh Wulf:

... deeper into that.

Simon Zambrovski:

Yeah. Yeah. The point of why you need this is that you can do very interesting stuff with it. So the mocking itself is not complicated, right? If you're mocking something, then it's easy. But the point is, if you want to verify that it really behaved in a certain way, because you can't verify it differently later on, that's difficult, because then the mock needs to preserve the state and you should be able to query it. The only thing you can verify on this mock is that it has been called, right? But if the mock becomes more complex, because it's like a mock object that has real behavior, then it becomes a little bit more difficult to test, and therefore there is this Camunda-BPM-Mockito.

And by the way, beyond what I just told you about, there is another framework, and we just restarted it half a year ago: there is a coverage tool. It's called Camunda BPM Process Test Coverage. We just released 1.0.1 two weeks ago, or 1.1.0. It's on-

Josh Wulf:

GitHub.

Simon Zambrovski:

... On the Camunda Community Hub; it's an official extension. It supports JUnit testing. So again, it doesn't matter what flavor you are using; the question is the test driver. And the test driver is either JUnit 4 or JUnit 5, or you can say, "It doesn't matter how I start, I'm running an integration Spring test," so Spring Boot running in test mode, right? It supports all three. So the only question is how you connect to the engine that is actually driving your test. So that was the core. The extension is pretty old.

So it was already out there. Then it somehow got abandoned. I don't know what happened there, because almost all the famous people were there, like [Bert 00:55:29] was participating, [Falco 00:55:30] was participating. Martin Schimak, of course, I think he was one of the active members there. It was initially created by ELAP, and I think they are somewhere between Germany and Austria. I believe they're in Austria. And somehow it got a little bit abandoned. And as I was looking at it, like a year ago, it had something like 45 open issues reported as bugs and so on. And no one was doing anything. And I was like, "Wow, guys. I mean, that's an important tool. Let's-

Josh Wulf:

And it's popular. Yeah. 

Simon Zambrovski:

... let's do it again." But I mean, if you report a null pointer exception inside the framework code and no one reacts to it, you say, "Oh, should I really be using this in my project?" Right? Who knows? Because a null pointer exception is game over, it breaks, right? It breaks your tests, because you're using a framework that is supposed to measure coverage, right? Yeah. So what we did as a first step is we sorted out all the issues. Because it was not that they had a lot of null pointer exceptions. Okay, they had some, but the problem was it was just not maintained, so basically issues that were no longer true, or already fixed, or whatever.

So it was just an abandoned open source project. The first step was to go through all the issues, address them, and close the ones like "there is a null pointer exception" that were no longer true because they had been fixed. The second step was to fix the remaining null pointer exceptions. So we produced a last version that did not have a one in the version number; it was 0.7 or 0.5 or so, I don't remember anymore. So we produced a last version of the tool where all the exceptions were fixed. And then we had a very nice discussion, actually, at the summit with Dominik Horn, who is also a Camunda champion, about whether we actually want to give it another try. And he agreed, because at that moment he was already a member of FlowSquad.

They're focusing on providing an enterprise solution to build into your continuous CI/CD pipeline and so on, so you can monitor coverage changes over time, like code coverage or [CodeCaf 00:58:05] is doing, right? So he said, "Yeah, let's do that." We had a design session and then we rebuilt the tool from scratch completely. So it's a full rebuild now. And then a colleague of mine contributed a lot for the [GA5 00:58:25] and Spring Boot support. So we now have all three frameworks covered. We have a wonderful visualization tool based on bpmn.io, provided by Dominik and his colleagues from FlowSquad. So it looks nice and it gives you feedback, and yeah, we are happy it's there again. And we are again at the level where we can say, "Here, that's a tool to use." So if you are starting with testing, then use this as a visualization of what you have missed so far and what the branches are, because in the end you actually want to cover all branches in your process model, right?

Josh Wulf:

Yes.

Simon Zambrovski:

Yeah. So I think this is... And I'm happy that we got there, actually. And especially a very big thank you to Dominik for his work, because one thing is doing the backend coding and the integration into some testing framework; another thing is to create a beautiful UI that you can show to the customer, and that's a different skill. So I'm very happy that he helped us, and he basically created all of this stuff.

Josh Wulf:

Yeah. That's great. Good work. And then it's like resurrecting something that's really important to the community and very widely used. It's amazing. And so that can be used with both approaches-

Simon Zambrovski:

Yeah. That doesn't matter, actually. Yes. So what it does, you can just-

Yeah, of course. So the point is, I mean, this is one of the key things that we, especially [inaudible 01:00:10], are very, very careful about: to do this separation of concerns. So create small libraries doing one thing, instead of creating suites that do this and this and this and that, but not quite everything, so that I need another library anyway, right? For Camunda BPM Assert, just to tell you: it is slightly mixing concerns. You have assertions, but you also have elements of a driver. You can say, "Complete the task," right? You can provide the task ID, but if you don't provide the task ID, it will grab the task that you probably have open at that moment. If it finds some task, it will take this task and complete it.

Okay. So it's smart functionality, very handy, very useful, but for me it has nothing to do with assertions, right? It's application driver functionality: how to move the engine along, right? Execute job is the same thing. You have a job, exactly this asynchronous continuation, the job executor is not running, and you want to push it along. You say, "Execute job," without specifying anything, and it will do it for you. Yeah. So I think it's okay that it's there, and you don't want to cut it away now, but it was a little bit blurry, right? Camunda BPM JGiven, on the other hand, is only the JGiven integration: providing the process stage and explaining to people how to use it. It's only about this. You can use any assert library with it; you can use the assert library or skip it.

Internally, we are using the assert library, by the way. Then the coverage tool, the coverage measuring, does only that one thing, okay? You say what the engine is; it has to connect to the engine. And especially it has, of course, to put some elements into the engine to react when the engine executes the process, to say, "Okay, I passed this node now, mark it as passed." So there is of course an integration, but you can integrate it with anything. By the way, [FlowCaf 01:02:36] has been renamed, they have a new name, but I don't remember it; it just came up last week on LinkedIn. But what they did is they do this not only for testing, right?

So they are, for example, collecting data, real data. You can connect it to a running instance. So you can say, "I'm just using this like Optimize, exporting from the history and showing you: okay, you passed this 25 times and then you left." I think they can do the same with a real running system. But you need this integration point to the engine. And camunda-bpm-mockito is only about mocking stuff, okay? It does nothing more than to say, "Okay, I provide you, for example, a mock for the delegate or a mock for a listener." But there is more, because... Maybe next time we should cover how to actually test delegates. The thing there is that if the delegate implements this JavaDelegate interface from Camunda, then you need to pass in the delegate execution containing all the information about your process, right?

Josh Wulf:

Right. Whole process state.

Simon Zambrovski:

Yeah. Which is the process state, right? So camunda-bpm-mockito does that for you. We have a fake object. You can prefill it with the stuff you need, like the process definition ID and process instance ID and some variables, and then you can pass it into your delegate test. But let's speak about this next time.
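As a small teaser for that future topic: a delegate only needs a DelegateExecution, so even a plain Mockito mock is enough to unit test it (camunda-bpm-mockito's prefillable fake execution, which Simon mentions, is the more comfortable option). The delegate class and variable names here are hypothetical.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.junit.Test;

public class ConfirmOrderDelegateTest {

  @Test
  public void confirmsSmallOrdersAutomatically() throws Exception {
    // A stand-in for the process state the engine would normally pass in.
    DelegateExecution execution = mock(DelegateExecution.class);
    when(execution.getVariable("orderTotal")).thenReturn(49.99);

    // ConfirmOrderDelegate is a hypothetical JavaDelegate under test.
    new ConfirmOrderDelegate().execute(execution);

    verify(execution).setVariable("confirmed", true);
  }
}
```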

Josh Wulf:

So do I have this clear? You can use camunda-bpm-mockito for testing the process, in which case it's like a NOOP, and you can also use it for testing the delegate code.

Simon Zambrovski:

Yeah. But it's all about mocking, right? So it's a-

Josh Wulf:

Yeah. Got it. Okay. So today we covered, camunda-bpm-mockito as part of process testing. It's a NOOP, that's pretty much all it does, but then every [crosstalk 01:04:45] process [crosstalk 01:04:46]-

Simon Zambrovski:

Yeah. But I mean, it's not only that. It's a NOOP by default, but you have methods. For example, look, one of the key things you want to mock is: if this delegate is executed, I want to set variables, right? Like this and this and this, okay? Or if this mock is executed, I want a BPMN error to be thrown. This is a very typical behavior specification for the mock. And camunda-bpm-mockito gives you this ability in one method. So this is a nice thing. You just say, "Okay, create the mock for me, and then if it's executed, do this." Okay? So you can do more than a NOOP. And there are also some smart but very dangerous features there, like, what you can say is, yeah-

Josh Wulf:

We should do a whole episode on this thing. It sounds like it's a whole world.

Simon Zambrovski:

Yeah, of course.

Josh Wulf:

Yeah.

Simon Zambrovski:

And it's pretty mature already, because, I mean, it's been out there for almost, I don't know, seven years, eight years or so, so it's pretty-

Josh Wulf:

Okay. So yeah, it's going to have everything you need in it. Okay. Let's do that. So let's come back and look at testing delegates, because we did processes, and that's one part of the whole thing. Then you've got your application code, the services themselves, the delegates. And you use camunda-bpm-mockito for that, and there are the other camunda-bpm-mockito features that you can also use when testing processes. Let's do that in the next episode.

Simon Zambrovski:

Yeah. Good idea.

Josh Wulf:

Okay. Awesome. Well, thanks for running through the two principal flavors of testing processes, the unit-testing school of Martin Schimak, and then the behavior-driven testing school of Simon Zambrovski. Yeah. And [crosstalk 01:06:43]-

Simon Zambrovski:

It's not my school, but yeah, yeah, yeah. But that's okay. Yeah. I'm at least a teacher.

Josh Wulf:

Have fun.

Simon Zambrovski:

That's okay. Let's put it like this.

Josh Wulf:

Yeah. Awesome. Thank you so much.

Simon Zambrovski:

Thank you, Josh. Thank you for having me.