Getting Started
This document introduces the Touca SDK for Python through a simple example. If you are new to Touca, consider reading our general quickstart guide first.
The Touca SDK for Python is open source on GitHub under the Apache-2.0 License. It is published on PyPI and can be installed as a dependency using pip.
pip install touca
In this tutorial, we will use the Touca examples repository, whose Python examples already list this package as a dependency. Clone this repository to a local directory of your choice and set up its Python examples.
git clone git@github.com:trytouca/examples.git "<YOUR_LOCAL_DIRECTORY>"
cd "<YOUR_LOCAL_DIRECTORY>/python"
python -m venv .env
source .env/bin/activate
pip install touca
For our first example, let us write a Touca test for software that checks whether a given number is prime. You can find a possible first implementation in ./python/01_python_minimal/is_prime.py in the examples repository.
def is_prime(number: int):
    for i in range(2, number):
        if number % i == 0:
            return False
    return 1 < number
While this implementation is trivial, it stands in for code under test of arbitrary complexity that may involve many nested function calls and interactions with external systems and services. We could test the above code with unit tests, but it may not be as easy to write and maintain unit tests for future versions of our software.
With Touca, we can test our software workflows regardless of their complexity. Here is an example Touca test for our is_prime software using the Touca SDK for Python.
import touca
from is_prime import is_prime

@touca.Workflow
def is_prime_test(testcase: str):
    touca.check("is_prime_output", is_prime(int(testcase)))

if __name__ == "__main__":
    touca.run()
Notice that typical Touca tests do not list the inputs we use to test our workflow, nor do they specify the expected output of our software. This is unlike unit tests, in which we hard-code our input values and their corresponding expected values.
With Touca, we simply define how to run our workflow under test for any given test case. We capture the values of interesting variables and the runtime of important functions to describe the behavior and performance of our workflow for that test case, and we leave it to Touca to compare this description against that of a previous, trusted version of our workflow and to report any differences as our test cases are executed.
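As an illustration, a slightly extended version of the test above could capture both the returned value and the runtime of the code under test. This is only a sketch: it assumes touca.scoped_timer, the context manager the Python SDK provides for capturing runtime measurements, in addition to touca.check.

import touca
from is_prime import is_prime

@touca.Workflow
def is_prime_test(testcase: str):
    number = int(testcase)
    # capture how long the code under test takes for this test case
    with touca.scoped_timer("is_prime_runtime"):
        result = is_prime(number)
    # capture any variable that describes the behavior of the workflow
    touca.check("is_prime_output", result)

if __name__ == "__main__":
    touca.run()

For the rest of this tutorial, we stick with the minimal test shown earlier.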
Let us run our Touca test, passing 13 as a sample input:
python 01_python_minimal/is_prime_test.py --team tutorial --suite is_prime --revision 1.0 --offline --testcase 13
This command produces the following output.
Touca Test Framework
Suite: is_prime
Revision: 1.0

( 1 of 1 ) 13 (pass, 1 ms)

Processed 1 of 1 testcases
Test completed in 5 ms
The Touca test framework passes 13 as the testcase parameter to our test workflow. We convert this test case to a number, pass it to our code under test, and capture the actual value returned by our software as a Touca test result. This value is stored, in its original data type, in the binary file ./results/is_prime/1.0/13/touca.bin.
Every time we make changes to our code under test, we can repeat this process with the same set of test cases and compare the newly generated binary files against those of a trusted version to check whether our code changes impact the overall behavior and performance of our software. For example, once a new version 2.0 has produced its own results, we could compare its binary file against the one from version 1.0:
touca_cli compare --src=./results/is_prime/2.0/13/touca.bin --dst=./results/is_prime/1.0/13/touca.bin
But this method quickly becomes inconvenient once we test our workflow with tens or hundreds of test cases and have to manage that many result files by hand. Fortunately, the Touca server, which can be self-hosted or cloud-hosted, can manage and compare test results and report their differences for us.
You will need an API Key and an API URL to submit test results.
If you have not already, create an account at app.touca.io. Once you create a new suite, the server shows an API Key and an API URL that you can use to submit test results.
export TOUCA_API_KEY="8073c34f-a48c-4e69-af9f-405b9943f8cc"
export TOUCA_API_URL="https://api.touca.io/@/tutorial/prime-test"
echo -e "19\n51\n97" > testcases.txt
python 01_python_minimal/is_prime_test.py --testcase-file testcases.txt --revision 1.0
Touca Test Framework
Suite: prime-test
Revision: 1.0

( 1 of 3 ) 19 (pass, 1 ms)
( 2 of 3 ) 51 (pass, 0 ms)
( 3 of 3 ) 97 (pass, 0 ms)

Processed 3 of 3 testcases
Test completed in 574 ms
Touca server after submitting results for v1.0
Now, if we make changes to our workflow under test, we can rerun this test and rely on Touca to check whether our changes affected the behavior or performance of our software.
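For example, a hypothetical version 2.0 of is_prime might check divisors only up to the integer square root of the input. The snippet below is purely illustrative; any change to the code under test would work just as well.

from math import isqrt

def is_prime(number: int):
    # version 2.0: check divisors only up to the integer square root
    for i in range(2, isqrt(number) + 1):
        if number % i == 0:
            return False
    return 1 < number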
python 01_python_minimal/is_prime_test.py --revision 2.0
Touca Test Framework
Suite: prime-test
Revision: 2.0

( 1 of 3 ) 19 (pass, 1 ms)
( 2 of 3 ) 51 (pass, 0 ms)
( 3 of 3 ) 97 (pass, 0 ms)

Processed 3 of 3 testcases
Test completed in 546 ms
Touca server after submitting results for v2.0
In our example, we captured the output of our workflow using touca.check. But unlike integration tests, we are not bound to the output of our workflow: we can capture any number of data points, from anywhere within our code. This is especially useful if our workflow has multiple stages. We can capture the output of each stage without publicly exposing its API, and if the behavior of a stage changes in a future version of our code, we can leverage the captured output to find the root cause more easily.
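As an illustration, suppose our workflow first parsed and normalized its raw input before running the primality check. The sketch below uses a hypothetical parse_input helper to show how the output of that intermediate stage could be captured alongside the final result.

import touca
from is_prime import is_prime

def parse_input(testcase: str) -> int:
    # hypothetical intermediate stage: normalize the raw test case string
    return int(testcase.strip())

@touca.Workflow
def is_prime_test(testcase: str):
    number = parse_input(testcase)
    # capture the output of the intermediate stage...
    touca.check("parsed_number", number)
    # ...as well as the output of the final stage
    touca.check("is_prime_output", is_prime(number))

if __name__ == "__main__":
    touca.run()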
In the next documents, we will learn how to use the Touca SDK for Python to test real-world software workflows.