The valid instruction set is defined as follows:
Blank lines (no instruction)
Comments: starting with a double slash '//' (no instruction)
DEFINE $(raw data) AS {nameInTheContext<Res:File>}
LOAD {path_To_Resource} [ FROM {resourceRepository<Rep>} ] [ AS {nameInTheContext<Res:File>} ]
CONVERT {resourceToConvert<Res>} TO {<Cat:Res>} ( {<Conv>} ) [ USING {config<Res>} ] AS {convertedResource<Res>}
EXECUTE {<Cmd>} WITH {<Res>} ON {<Tar>} [ USING {config<Res>} ] AS {result<Res>}
ASSERT {resourceToTest<Res>} ( IS | HAS | DOES ) {<Asr>} [ ( WITH | THAN | THE ) {expectedResult<Res>} ] [ USING {config<Res>} ]
VERIFY {resourceToTest<Res>} ( IS | HAS | DOES ) {<Asr>} [ ( WITH | THAN | THE ) {expectedResult<Res>} ] [ USING {config<Res>} ]
Note: The VERIFY instruction is available since Squash TA framework v1.6.0. It is a new type of assertion instruction.
Red words: They represent the language tokens. They are in uppercase and they never change.
Black words: They represent a physical resource.
Blue words: Identifiers which point to a resource component. They have the following structure: {name<Type:Category-Name>} or {name<Type>} or {<Type>} with:
name: A name describing the element pointed to by the identifier.
Type: The component type of the element pointed to by the identifier: Res for resources, Tar for targets, Rep for repositories.
Category-Name: The category-name of the component which wraps the pointed element.
Pink words: Identifiers which reference an engine component. They have the following structure: {<Type>} with:
Type: The engine component type of the element: Cmd for commands, Asr for assertions and Conv for converters
Yellow word: The category-name of the expected resource after a conversion.
[ ]: Elements inside square brackets can be omitted in some cases.
Note: For convenience, 'name' is often used instead of 'identifier' in the documentation.
One instruction per line, and one line per instruction. In other words, the end of a line also ends the instruction, which will be parsed as is. The language tokens are case-insensitive, and resources may be defined inline (just like in a DEFINE instruction, see below). On the other hand, the identifiers we discussed above are case-sensitive (i.e. you must respect lowercase and uppercase letters).
An instruction can be divided into clauses. Some are mandatory while others are optional. In the reference table above you can recognize a clause as a language token (uppercased word) and the identifier that immediately follows it.
For each instruction the most obvious mandatory clause is the first one, which states which instruction you are referring to. This first clause is also called the head clause. The optional clauses are shown above between square brackets '[ ]'. Those brackets aren't part of the language; they only serve to delimit the optional clauses. Except for the head clause, which determines the kind of instruction, the order of the other clauses is not fixed.
Also note that the DSL does not support nested instructions.
TA scripts can contain comments. They start with a '//'. To write a multiline comment, start each line of the comment with '//'. It is not allowed to write a comment on the same line as an instruction. For example:
Example of a not allowed comment:
LOAD toto.txt AS toto.file //loading of the toto resource
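By contrast, a comment placed on its own line (or spread over several lines, each starting with '//') is valid. A minimal sketch, reusing the same toto.txt resource:

```
// loading of the toto resource
// (a multiline comment: each line starts with '//')
LOAD toto.txt AS toto.file
```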
DEFINE $(raw data) AS {nameInTheContext<Res:File>}
Input:
raw data: A character string (if there is more than one line, each line must be separated with '\n')
Output:
{nameInTheContext<Res:File>}: The identifier of the resource created in the test context.
The DEFINE instruction is seldom used but may come in handy. Basically it lets you define any text content directly within the script, and binds it to a name. This content will be stored in the test context as a File resource, under the name supplied in the AS clause. This resource will be available throughout the whole test, but won't exist anymore when another test begins.
Example 1 : simple resource define
DEFINE $(select * from MY_TABLE) AS query.file
Example 2 : structured resource definition
DEFINE $(some letters, a tabulation \t and \n the rest after a linefeed.) AS structured-text.file
A more common use for resource definitions is to simply inline them within the instruction that will use them.
Example : resource inlined in a conversion instruction.
CONVERT $(select * from MY_TABLE) TO query.sql AS my_query.query.sql
The advantage of explicitly using DEFINE is that it binds the newly created File resource to a name, thus allowing you to refer to it again later in the script. If you don't need to reuse that resource, an inlined definition is fine.
Inlined resources are notably useful when passing configuration to engine components. Engine components sometimes need a bit of text to be configured properly, which can be inlined instead of explicitly creating a file for it.
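The two styles can be contrasted in a short sketch (the resource and converter names are the ones used in the earlier examples):

```
// explicit DEFINE: the query is bound to a name and can be reused later
DEFINE $(select * from MY_TABLE) AS query.file
CONVERT query.file TO query.sql (query) AS my_query.query.sql

// inline definition: one-shot, the raw text cannot be referenced again
CONVERT $(select * from MY_TABLE) TO query.sql (query) AS my_query2.query.sql
```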
LOAD {path_To_Resource} [FROM {resourceRepository<Rep>}] [AS {nameInTheContext<Res:File>}]
Input:
{path_To_Resource}: The path to the resource to load
{resourceRepository<Rep>}: The name of the resource repository in which the resource to load is located.
Output:
{nameInTheContext<Res:File>}: The name of the resource created in the test context.
The LOAD instruction will search for a resource in all of the existing repositories. When it is found it will be brought into the test context as a File resource. If no AS clause is supplied, the name of this File resource will be the name under which it was searched for (including the folder hierarchy if it was hidden in a deep file tree).
The path of the resource doesn't need to be a full URL, as that kind of detail is handled by the repositories. A repository backed by a file system generally has a base directory; you can then omit the full path and supply only a path relative to that base directory.
Also note that the directory separator is a slash '/' regardless of the underlying operating system. More precisely, no backslashes '\' are needed under Windows. Backslashes aren't valid characters for an identifier and will be rejected anyway.
If by chance two or more repositories could answer the query (i.e. if a given file name exists in two file systems, each of them addressed by a distinct repository), the File resource returned depends on which of them replied first. Consider it random, and if problems happen you may be interested in the FROM clause (see below).
If the loading fails because the resource was not found, the test will end with a status depending on the phase in which it was executed.
The FROM clause is optional. If specified, instead of searching every repository for the resource, the engine will search only the one you specified. It may speed up file retrieval if the current pool of repositories includes very slow ones, e.g. a web resource hosted on a very busy server.
The AS clause is optional. If specified, instead of binding the new File resource to the name used in the first clause, the engine will bind it to this alias instead.
Example 1: simple file loading
LOAD data-folder/myfile // that's it, the File resource will be accessible under the name 'data-folder/myfile'
Example 2: load then alias
LOAD long/path/to/the/resource AS my_resource.file
Example 3: load from a specific repository
LOAD myfile FROM my.repository
CONVERT {resourceToConvert<Res>} TO {<Cat:Res>}( <Conv> ) [ USING {config<Res>} ] AS {convertedResource<Res>}
Input:
{resourceToConvert<Res>}: The name of the resource to convert
{<Cat:Res>}: The category-name of the resource expected after the conversion.
<Conv>: The category-name of the converter used for the conversion.
{config<Res>}: The name of the complementary resource needed for the conversion.
Output:
{convertedResource<Res>}: The name of the converted resource.
The CONVERT instruction will take an input resource and produce a new resource, that will then be available under the name mentioned in the AS clause. The resource must exist in the Test context beforehand, for instance as resulting from a LOAD instruction. Remember that no Engine Component will ever modify the input resource, and it will still be available as it was after the conversion is over.
Depending on the invoked converter, a CONVERT instruction will perform at least one of two operations:
Produce a resource with the same data as the input resource, but wrapped in a different category-name.
Produce a resource with new data based on the input resource, while the category-name stays the same.
Some converters even do both. In any case you should refer to the documentation of that converter.
The TO clause is mandatory, as it is where you specify the category-name of the output (which may be the same as the category-name of the input resource). However, in some cases two or more converters accepting the same input and output categories may exist together in the engine, leading to an error. In such cases you must disambiguate the situation by specifying which converter you need. This is the only case where you need to expand the full signature of that converter: you specify it by immediately appending the name of the converter, surrounded by parentheses '()', to the output category. Even in cases where you don't need to specify the converter name, we highly advise doing so: it protects you if one day a new converter with the same input and output categories is created, which would make the converter name mandatory.
The optional USING clause lets you specify an arbitrary number of resources that will be treated as configuration for this operation. The category-name of these resources, and what information they should convey, depends on the converter being used, so having a look at the documentation of that converter is certainly useful.
Example 1: simple conversion from file to CSV
CONVERT mydata.file TO csv AS mydata.csv
Example 2: conversion with configuration
CONVERT my_result.resultset TO dataset.dbunit USING $(tablename : MY_TABLE) AS mydata.dataset
Example 3: conversion from inlined text to sql type, specifying which converter is used
CONVERT $(select * from MY_TABLE) TO query.sql (query) AS my_query.query.sql
EXECUTE {<Cmd>} WITH {<Res>} ON {<Tar>} [ USING {config<Res>} ] AS {result<Res>}
Input:
{<Cmd>}: The command to execute.
{<Res>}: The name of the resource to use with the command.
{<Tar>}: The name of the target.
{config<Res>}: The name of the complementary resource needed to use with the command.
Output:
{result<Res>}: The name of the resource generated by the command.
The EXECUTE instruction will perform an operation involving a resource (WITH clause), on a given target (ON clause). The result of this operation, if any, will be returned as a resource published in the Test context under the name supplied in the AS clause.
If the operation returns some results, the actual type of the resulting resource depends on the command being executed, so you should refer to the documentation of that command to know how to handle it in the rest of the test.
The optional USING clause lets you specify an arbitrary number of resources that will be treated as configuration for this operation. The category-name of these resources, and what information they should convey, depends on the command being used, so having a look at the documentation of that command is certainly useful.
You MUST provide an input resource, a target and an alias for the result, even if the command does not actually use all of these features.
Example 1: command, using a dummy identifier for the result name (because that command doesn't return any)
EXECUTE put WITH my_file.file ON my_ftp AS no_result_anyway
Example 2: command with configuration
EXECUTE get WITH $() ON my_ftp USING $(remotepath : data/the-file.txt, filetype : ascii) AS my_new_file.file
Note that in the last example we used a dummy inlined resource $(), since in that case the 'get' command doesn't use any input resource.
ASSERT {resourceToTest<Res>} ( IS | HAS | DOES ) {<Asr>} [ ( WITH | THAN | THE ) {expectedResult<Res>} ] [ USING {config<Res>} ]
VERIFY {resourceToTest<Res>} ( IS | HAS | DOES ) {<Asr>} [ ( WITH | THAN | THE ) {expectedResult<Res>} ] [ USING {config<Res>} ]
Input:
{resourceToTest<Res>}: The name of the resource to validate.
{<Asr>}: The kind of assertion to use
{expectedResult<Res>}: The name of the reference resource.
{config<Res>}: The name of the complementary resource needed for the assertion.
The assertion instructions will perform a check on the supplied resource, optionally compared to another resource. If the assertion is verified, the test continues. If the assertion fails or finishes in error:
In ASSERT mode, the execution of the current test phase is stopped. The teardown phase is then executed (if the failure did not already occur in the teardown phase).
In VERIFY mode, the next instruction is executed.
In all cases the test's final status will be the most severe status of its instructions. For details on the execution workflow and test statuses, please see this page.
The VERIFY assertion mode is available since Squash TA framework 1.6.0. Before that, only the ASSERT mode was available.
Note that, unlike other instructions, the assertion instructions offer multiple choices for some tokens. The first multi-token clause is the one identifying the assertion ({<Asr>} in the syntax above); the second one identifies the secondary resource ({expectedResult<Res>} in the syntax above). In either case you only need to pick one token, and it makes sense to pick the one that best fits the grammar of the instruction (see examples below).
The optional ( WITH | THAN | THE ) clause specifies another resource; in that case the primary resource will be compared to this secondary resource.
If no ( WITH | THAN | THE ) clause is used, the resource and the assertion are assumed to be self-sufficient to perform the check. We then speak of a 'unary assertion'. If that clause is used, we speak of a 'binary assertion', and the primary resource usually represents the actual result from the SUT while the secondary resource represents the expected result.
The optional USING clause lets you specify an arbitrary number of resources that will be treated as configuration for this operation. The category-name of these resources, and what information they should convey, depends on the assertion being used, so having a look at the documentation of that assertion is certainly useful.
Example 1: simple unary assertion
ASSERT my_result.result.sahi IS success
Example 2: simple binary assertion (awkward)
ASSERT actual_result.dataset.dbunit IS contain WITH expected_result.dataset.dbunit
In the example above the "sentence" is grammatically jarring. Still, although inelegant, it works as expected. But you might prefer:
Example 3: simple binary assertion (better)
ASSERT actual_result.dataset.dbunit DOES contain THE expected_result.dataset.dbunit
This version does exactly the same thing, but reads somewhat better.
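Since VERIFY shares the exact same syntax, the binary assertion of example 3 can also be written in VERIFY mode; the only difference is the behavior on failure, where execution continues with the next instruction instead of stopping the phase. A sketch, reusing the resource names from example 3:

```
// same check as example 3, but a failure does not stop the test phase
VERIFY actual_result.dataset.dbunit DOES contain THE expected_result.dataset.dbunit
```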