In a comprehensive blog post, João Proença covers how to use the BDDFramework component to automate BDD testing in OutSystems. Your Complete Guide to BDD Testing in OutSystems includes these topics:
An introduction to the BDDFramework
Test execution with the BDDFramework REST API
Data-driven API tests
You can now successfully run parallel tests. Doing so in previous versions produced unpredictable results, such as a failed test without failed assertions or a passed test with failed assertions. Keep in mind that you still need to write your test code to ensure it is ready to be executed in parallel. Generally, this means planning your test data so as to avoid data collisions among tests running in parallel (for example, two tests that create the same “John Doe” record in an “Employees” entity).
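One common way to avoid such collisions is to make every scenario's data unique, for example by suffixing records with a per-scenario identifier. The TypeScript sketch below illustrates the idea; the Employee shape and buildTestEmployee helper are hypothetical examples, not part of the framework:

    // Sketch: make each scenario's test data unique so parallel runs don't collide.
    // The Employee shape and buildTestEmployee helper are hypothetical examples.
    import { randomUUID } from "node:crypto";

    interface Employee {
      name: string;
      email: string;
    }

    function buildTestEmployee(scenarioId: string): Employee {
      const unique = `${scenarioId}-${randomUUID()}`;
      return {
        name: `John Doe ${unique}`,                  // no two scenarios create the same record
        email: `john.doe+${unique}@test.example`,
      };
    }

    // Each scenario sets up its own, non-colliding record.
    const employeeA = buildTestEmployee("create-employee-scenario");
    const employeeB = buildTestEmployee("update-employee-scenario");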
If your test code executes inside a single database transaction, you can complete the teardown step through a rollback instead of deleting generated test data (see the sketch after this list). This typically applies in cases where:
No commits occur inside the business code being tested. Anything that happens prior to commit still needs to be explicitly deleted/reverted.
No service action calls or REST/SOAP service calls occur inside the business code being tested. Anything that happens inside those calls is part of a different transaction and may still need to be explicitly undone.
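Outside OutSystems, the same rollback-based teardown pattern looks roughly like the sketch below. The Db interface is a hypothetical stand-in for whatever manages your transaction; in OutSystems, the platform handles the transaction for you, so treat this only as an analogy:

    // Analogy only: teardown via rollback of a single transaction instead of
    // deleting generated test data. The Db interface is a hypothetical stand-in.
    interface Db {
      begin(): Promise<void>;
      rollback(): Promise<void>;
    }

    async function runScenarioInTransaction(
      db: Db,
      scenario: () => Promise<void>
    ): Promise<void> {
      await db.begin();
      try {
        await scenario();    // setup + act + assert, all inside one transaction
      } finally {
        await db.rollback(); // teardown: nothing was committed, nothing to delete
      }
    }

As the list above notes, this only works when the scenario itself never commits and never crosses into another transaction through service actions or REST/SOAP calls.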
Tests run about 15 to 30 times faster than in previous versions, depending on the test type. This speeds up test feedback loops, especially for large test portfolios.
New functionality allows you to create tags and then associate one or more tags with test scenarios during development. These user-defined tags allow you more granular control over your test portfolio. Tags may identify characteristics such as:
Test priority or criticality
Test complexity and speed
Test purpose (business rules, API, DB consistency, and error handling)
Test mapping to a specific part of your application
Tags are a new base block in the BDDFramework that you import into your test module. When you use the BDDFramework application template, a base tag template is included in your initial module. Tags are meant to be generic and reusable: define each tag in its own web block and then add it to the relevant tests. Additionally, create your tags in a separate UI Flow (for example, TestTags).
We recommend creating a new tag from the template. To do so:
Copy the Template_Tag block into your new TestTags flow.
Rename the copied block. For example, if this is a tag to categorize critical tests, call it CriticalTag.
Select the TagLabel local variable that exists inside the block and configure its default value. This value will define your tag and is used when filtering tests to execute. In the previous example, the value could be Critical.
If you’re working in a test module from an older BDDFramework version that does not include the tag template, we recommend creating the tag from scratch. To create a tag from scratch:
Create a new block inside your TestTags flow.
Name the new block. For example, if the purpose of the tag is to categorize Critical tests, call it CriticalTag.
Create a new local variable called TagLabel, and configure its default value. This value defines your tag and is used when filtering tests to execute.
Search for the BDDTag block reference in the BDDFramework module and drag it inside your block. TrueChange highlights it in red because the block requires an input parameter.
Bind the TagLabel local variable as the required input for the BDDTag block by dragging it onto the block's Tag placeholder.
The BDD Scenario screen has a Tags placeholder, where you can drag tags to your test scenario. The Tags placeholder is between the scenario Description and Setup placeholder fields. You can add one or more tags to the Tags placeholder.
Adding tags to scenarios is easy. Simply search for the tag in the TestTags flow and drag it to the Tags placeholder in your scenario block. And you're done!
You can use tags to specify tests to execute or skip. The new API accepts two new optional parameters in the header:
SkipTags: A comma-separated list of tags specifying tests to skip. This parameter supersedes ExecuteTags (that is, a test with the same tag in both SkipTags and ExecuteTags is skipped, not executed).
ExecuteTags: A comma-separated list of tags specifying tests to execute. If no tags are specified, all tests execute except those covered by SkipTags (see the example after this list).
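For illustration, this is roughly how the two headers might be passed when triggering a test run over HTTP. The endpoint URL below is a placeholder, not the framework's actual URL format; check the BDDFramework documentation for the real endpoint in your environment:

    // Sketch: passing the optional tag-filter headers to the test execution API.
    // The URL is a placeholder; ExecuteTags and SkipTags are comma-separated lists.
    async function runFilteredTests(endpoint: string): Promise<unknown> {
      const response = await fetch(endpoint, {
        headers: {
          ExecuteTags: "Critical,API", // run only tests tagged Critical or API...
          SkipTags: "Slow",            // ...unless they are also tagged Slow
        },
      });
      return response.json();
    }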
Tests are executed or skipped according to the following logic (a code sketch follows this list).
A test executes when:
none of its tags appear in the SkipTags parameter
AND at least one of its tags appears in the ExecuteTags parameter, OR the ExecuteTags parameter is empty
A test is skipped when:
at least one of its tags appears in the SkipTags parameter
OR none of its tags appear in the ExecuteTags parameter, AND the ExecuteTags parameter is not empty
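The same rules can be written as a small predicate. This is a sketch of the decision logic, not framework code:

    // Sketch of the execute/skip decision described above. SkipTags supersedes ExecuteTags.
    function shouldExecute(
      testTags: string[],
      executeTags: string[],
      skipTags: string[]
    ): boolean {
      const skipped = testTags.some((tag) => skipTags.includes(tag));
      const selected =
        executeTags.length === 0 ||
        testTags.some((tag) => executeTags.includes(tag));
      return !skipped && selected;
    }

    // Examples:
    shouldExecute(["Critical"], [], []);                         // true: no filters set
    shouldExecute(["Critical"], ["Critical"], ["Slow"]);         // true: selected, not skipped
    shouldExecute(["Critical", "Slow"], ["Critical"], ["Slow"]); // false: SkipTags wins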
Based on feedback, we've added a new version of the execution API that provides more information about test execution. The existing API (v1) remains compatible. Let's take a look at what's new with API v2.
The API v2 supports these two new header parameters, which are described earlier in this article:
SkipTags
ExecuteTags
The API v2 response includes the new output attributes annotated in the example below:
{ "SuiteScreen": "string", "IsSuccess": true, "SuccessfulScenarios": 0, "FailedScenarios": 0, "SkippedScenarios": 0, // number of tests skipped, as specified in the SkipTags parameter "TestScenarioResults": [ // list of tests that exist inside the test screen (executed or not) { "ScenarioId": "string", // scenario ID, as specified in the "Scenario Identifier" placeholder "Description": "string", // scenario description, as specified in the "Scenario Description" placeholder "IsSuccess": true, // indicates if this scenario was successful or not "IsSkipped": true, // indicates if the scenario was skipped, as specified in the SkipTags parameter "FailureReport": "string", // given-when-then detail, which appears only if a scenario fails "Tags": [ // list of all the tags associated with the scenario "string" ] } ], "ErrorMessage": "string"}
{
  "SuiteScreen": "string",
  "IsSuccess": true,
  "SuccessfulScenarios": 0,
  "FailedScenarios": 0,
  "SkippedScenarios": 0, // number of tests skipped, as specified in the SkipTags parameter
  "TestScenarioResults": [ // list of tests that exist inside the test screen (executed or not)
    {
      "ScenarioId": "string", // scenario ID, as specified in the "Scenario Identifier" placeholder
      "Description": "string", // scenario description, as specified in the "Scenario Description" placeholder
      "IsSuccess": true, // indicates if this scenario was successful or not
      "IsSkipped": true, // indicates if the scenario was skipped, as specified in the SkipTags parameter
      "FailureReport": "string", // given-when-then detail, which appears only if a scenario fails
      "Tags": [ // list of all the tags associated with the scenario
        "string"
      ]
    }
  ],
  "ErrorMessage": "string"
}