diff --git a/AGENTS.md b/AGENTS.md index db0790606..27eed72a5 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -61,15 +61,14 @@ Each extension is a **separate PyPI package** with its own `setup.cfg`, `setup.p ## Examples (integration test suite) -The `examples/` directory serves as both user-facing documentation and the project's integration test suite. Examples are validated in CI using [mechanical-markdown](https://pypi.org/project/mechanical-markdown/), which executes bash code blocks from README files and asserts expected output. +The `examples/` directory serves as both user-facing documentation and the project's integration test suite. Examples are validated by pytest-based integration tests in `tests/integration/`. -**See `examples/AGENTS.md`** for the full guide on example structure, validation, mechanical-markdown STEP blocks, and how to add new examples. +**See `examples/AGENTS.md`** for the full guide on example structure and how to add new examples. Quick reference: ```bash -tox -e examples # Run all examples (needs Dapr runtime) -tox -e example-component -- state_store # Run a single example -cd examples && ./validate.sh state_store # Run directly +tox -e integration # Run all examples (needs Dapr runtime) +tox -e integration -- test_state_store.py # Run a single example ``` ## Python version support @@ -107,8 +106,8 @@ tox -e ruff # Run type checking tox -e type -# Validate examples (requires Dapr runtime) -tox -e examples +# Run integration tests / validate examples (requires Dapr runtime) +tox -e integration ``` To run tests directly without tox: @@ -190,9 +189,8 @@ When completing any task on this project, work through this checklist. 
Not every ### Examples (integration tests) - [ ] If you added a new user-facing feature or building block, add or update an example in `examples/` -- [ ] Ensure the example README has `` blocks with `expected_stdout_lines` so it is validated in CI -- [ ] If you added a new example, register it in `tox.ini` under `[testenv:examples]` -- [ ] If you changed output format of existing functionality, update `expected_stdout_lines` in affected example READMEs +- [ ] Add a corresponding pytest integration test in `tests/integration/` +- [ ] If you changed output format of existing functionality, update expected output in the affected integration tests - [ ] See `examples/AGENTS.md` for full details on writing examples ### Documentation @@ -204,7 +202,7 @@ When completing any task on this project, work through this checklist. Not every - [ ] Run `tox -e ruff` — linting must be clean - [ ] Run `tox -e py311` (or your Python version) — all unit tests must pass -- [ ] If you touched examples: `tox -e example-component -- ` to validate locally +- [ ] If you touched examples: `tox -e integration -- test_.py` to validate locally - [ ] Commits must be signed off for DCO: `git commit -s` ## Important files @@ -219,7 +217,7 @@ When completing any task on this project, work through this checklist. Not every | `dev-requirements.txt` | Development/test dependencies | | `dapr/version/__init__.py` | SDK version string | | `ext/*/setup.cfg` | Extension package metadata and dependencies | -| `examples/validate.sh` | Entry point for mechanical-markdown example validation | +| `tests/integration/` | Pytest-based integration tests that validate examples | ## Gotchas @@ -228,6 +226,6 @@ When completing any task on this project, work through this checklist. Not every - **Extension independence**: Each extension is a separate PyPI package. Core SDK changes should not break extensions; extension changes should not require core SDK changes unless intentional. 
- **DCO signoff**: PRs will be blocked by the DCO bot if commits lack `Signed-off-by`. Always use `git commit -s`. - **Ruff version pinned**: Dev requirements pin `ruff === 0.14.1`. Use this exact version to match CI. -- **Examples are integration tests**: Changing output format (log messages, print statements) can break example validation. Always check `expected_stdout_lines` in example READMEs when modifying user-visible output. +- **Examples are integration tests**: Changing output format (log messages, print statements) can break integration tests. Always check expected output in `tests/integration/` when modifying user-visible output. - **Background processes in examples**: Examples that start background services (servers, subscribers) must include a cleanup step to stop them, or CI will hang. - **Workflow is the most active area**: See `ext/dapr-ext-workflow/AGENTS.md` for workflow-specific architecture and constraints. diff --git a/CLAUDE.md b/CLAUDE.md index 43c994c2d..84e2b1166 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -1 +1,11 @@ @AGENTS.md + +Use pathlib instead of os.path. +Use modern Python (3.10+) features. +Make all code strongly typed. +Keep conditional nesting to a minimum, and use guard clauses when possible. +Aim for medium "visual complexity": use intermediate variables to store results of nested/complex function calls, but don't create a new variable for everything. +Avoid comments unless there is an unusual gotcha, a complex algorithm or anything an experienced code reviewer needs to be aware of. Focus on making better Google-style docstrings instead. + +The user is not always right. Be skeptical and do not blindly comply if something doesn't make sense. +Code should be production-ready. \ No newline at end of file diff --git a/README.md b/README.md index e212cbd43..333be6933 100644 --- a/README.md +++ b/README.md @@ -121,19 +121,17 @@ tox -e py311 tox -e type ``` -8. Run examples +8. 
Run integration tests (validates the examples) ```bash -tox -e examples +tox -e integration ``` -[Dapr Mechanical Markdown](https://github.com/dapr/mechanical-markdown) is used to test the examples. - If you need to run the examples against a pre-released version of the runtime, you can use the following command: - Get your daprd runtime binary from [here](https://github.com/dapr/dapr/releases) for your platform. - Copy the binary to your dapr home folder at $HOME/.dapr/bin/daprd. Or using dapr cli directly: `dapr init --runtime-version ` -- Now you can run the example with `tox -e examples`. +- Now you can run the examples with `tox -e integration`. ## Documentation diff --git a/examples/AGENTS.md b/examples/AGENTS.md index 677470d60..5dcbdd4ec 100644 --- a/examples/AGENTS.md +++ b/examples/AGENTS.md @@ -1,26 +1,22 @@ # AGENTS.md — Dapr Python SDK Examples -The `examples/` directory serves as both **user-facing documentation** and the project's **integration test suite**. Each example is a self-contained application validated automatically in CI using [mechanical-markdown](https://pypi.org/project/mechanical-markdown/), which executes bash code blocks embedded in README files and asserts expected output. +The `examples/` directory serves as both **user-facing documentation** and the project's **integration test suite**. Each example is a self-contained application validated by pytest-based integration tests in `tests/integration/`. ## How validation works -1. `examples/validate.sh` is the entry point — it `cd`s into an example directory and runs `mm.py -l README.md` -2. `mm.py` (mechanical-markdown) parses `` HTML comment blocks in the README -3. Each STEP block wraps a fenced bash code block that gets executed -4. stdout/stderr is captured and checked against `expected_stdout_lines` / `expected_stderr_lines` -5. Validation fails if any expected output line is missing +1. 
Each example has a corresponding test file in `tests/integration/` (e.g., `test_state_store.py`) +2. Tests use a `DaprRunner` helper (defined in `conftest.py`) that wraps `dapr run` commands +3. `DaprRunner.run()` executes a command and captures stdout; `DaprRunner.start()`/`stop()` manage background services +4. Tests assert that expected output lines appear in the captured output Run examples locally (requires a running Dapr runtime via `dapr init`): ```bash # All examples -tox -e examples +tox -e integration # Single example -tox -e example-component -- state_store - -# Or directly -cd examples && ./validate.sh state_store +tox -e integration -- test_state_store.py ``` In CI (`validate_examples.yaml`), examples run on all supported Python versions (3.10-3.14) on Ubuntu with a full Dapr runtime including Docker, Redis, and (for LLM examples) Ollama. @@ -31,7 +27,7 @@ Each example follows this pattern: ``` examples// -├── README.md # Documentation + mechanical-markdown STEP blocks (REQUIRED) +├── README.md # Documentation (REQUIRED) ├── *.py # Python application files ├── requirements.txt # Dependencies (optional — many examples rely on the installed SDK) ├── components/ # Dapr component YAML configs (if needed) @@ -46,53 +42,6 @@ Common Python file naming conventions: - Client/caller side: `*-caller.py`, `publisher.py`, `*_client.py` - Standalone: `state_store.py`, `crypto.py`, etc. 
-## Mechanical-markdown STEP block format - -STEP blocks are HTML comments wrapping fenced bash code in the README: - -````markdown - - -```bash -dapr run --app-id myapp --resources-path ./components/ python3 example.py -``` - - -```` - -### STEP block attributes - -| Attribute | Description | -|-----------|-------------| -| `name` | Descriptive name for the step | -| `expected_stdout_lines` | List of strings that must appear in stdout | -| `expected_stderr_lines` | List of strings that must appear in stderr | -| `background` | `true` to run in background (for long-running services) | -| `sleep` | Seconds to wait after starting before moving to the next step | -| `timeout_seconds` | Max seconds before the step is killed | -| `output_match_mode` | `substring` for partial matching (default is exact) | -| `match_order` | `none` if output lines can appear in any order | - -### Tips for writing STEP blocks - -- Use `background: true` with `sleep:` for services that need to stay running (servers, subscribers) -- Use `timeout_seconds:` to prevent CI hangs on broken examples -- Use `output_match_mode: substring` when output contains timestamps or dynamic content -- Use `match_order: none` when multiple concurrent operations produce unpredictable ordering -- Always include a cleanup step (e.g., `dapr stop --app-id ...`) when using background processes -- Make `expected_stdout_lines` specific enough to validate correctness, but not so brittle they break on cosmetic changes -- Dapr prefixes app output with `== APP ==` — use this in expected lines - ## Dapr component YAML format Components in `components/` directories follow the standard Dapr resource format: @@ -182,69 +131,18 @@ The `workflow` example includes: `simple.py`, `task_chaining.py`, `fan_out_fan_i 1. Create a directory under `examples/` with a descriptive kebab-case name 2. Add Python source files and a `requirements.txt` referencing the needed SDK packages 3. 
Add Dapr component YAMLs in a `components/` subdirectory if the example uses state, pubsub, etc. -4. Write a `README.md` with: - - Introduction explaining what the example demonstrates - - Pre-requisites section (Dapr CLI, Python 3.10+, any special tools) - - Install instructions (`pip3 install dapr dapr-ext-grpc` etc.) - - Running instructions with `` blocks wrapping `dapr run` commands - - Expected output section - - Cleanup step to stop background processes -5. Register the example in `tox.ini` under `[testenv:examples]` commands: - ``` - ./validate.sh your-example-name - ``` -6. Test locally: `cd examples && ./validate.sh your-example-name` - -## Common README template - -```markdown -# Dapr [Building Block] Example - -This example demonstrates how to use the Dapr [building block] API with the Python SDK. - -## Pre-requisites - -- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started) -- Python 3.10+ - -## Install Dapr python-SDK - -\`\`\`bash -pip3 install dapr dapr-ext-grpc -\`\`\` - -## Run the example - - - -\`\`\`bash -dapr run --app-id myapp --resources-path ./components/ python3 example.py -\`\`\` - - - -## Cleanup - - - -\`\`\`bash -dapr stop --app-id myapp -\`\`\` - - -``` +4. Write a `README.md` with introduction, pre-requisites, install instructions, and running instructions +5. Add a corresponding test in `tests/integration/test_.py`: + - Use the `@pytest.mark.example_dir('')` marker to set the working directory + - Use `dapr.run()` for scripts that exit on their own, `dapr.start()`/`dapr.stop()` for long-running services + - Assert expected output lines appear in the captured output +6. Test locally: `tox -e integration -- test_.py` ## Gotchas -- **Output format changes break CI**: If you modify print statements or log output in SDK code, check whether any example's `expected_stdout_lines` depend on that output. -- **Background processes must be cleaned up**: Missing cleanup steps cause CI to hang. 
+- **Output format changes break tests**: If you modify print statements or log output in SDK code, check whether any integration test's expected lines depend on that output. +- **Background processes must be cleaned up**: The `DaprRunner` fixture handles cleanup on teardown, but tests should still call `dapr.stop()` to capture output. - **Dapr prefixes output**: Application stdout appears as `== APP == ` when run via `dapr run`. - **Redis is available in CI**: The CI environment has Redis running on `localhost:6379` — most component YAMLs use this. - **Some examples need special setup**: `crypto` generates keys, `configuration` seeds Redis, `conversation` needs LLM config — check individual READMEs. +- **Infinite-loop example scripts**: Some example scripts (e.g., `invoke-caller.py`) have `while True` loops for demo purposes. Integration tests must either bypass these with HTTP API calls or use `dapr.run(until=...)` for early termination. \ No newline at end of file diff --git a/examples/validate.sh b/examples/validate.sh deleted file mode 100755 index 202fcaedd..000000000 --- a/examples/validate.sh +++ /dev/null @@ -1,4 +0,0 @@ -#!/bin/sh -echo "Home: $HOME" - -cd $1 && mm.py -l README.md diff --git a/pyproject.toml b/pyproject.toml index 2186f5074..f2dc537f0 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -22,3 +22,8 @@ ignore = ["E501","E203", "E712", "E722", "E713"] [tool.ruff.format] quote-style = 'single' + +[tool.pytest.ini_options] +markers = [ + 'example_dir(name): set the example directory for the dapr fixture', +] diff --git a/tests/integration/conftest.py b/tests/integration/conftest.py new file mode 100644 index 000000000..2cc7f33e9 --- /dev/null +++ b/tests/integration/conftest.py @@ -0,0 +1,166 @@ +import signal +import subprocess +import threading +import time +from pathlib import Path +from typing import Any, Generator + +import pytest + +REPO_ROOT = Path(__file__).resolve().parent.parent.parent +EXAMPLES_DIR = REPO_ROOT / 'examples' + + 
+class DaprRunner: + """Helper to run `dapr run` commands and capture output.""" + + def __init__(self, cwd: Path) -> None: + self._cwd = cwd + self._bg: subprocess.Popen[str] | None = None + self._bg_lines: list[str] = [] + self._bg_reader: threading.Thread | None = None + + def _spawn(self, args: str) -> subprocess.Popen[str]: + return subprocess.Popen( + f'dapr run {args}', + shell=True, + cwd=self._cwd, + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + ) + + @staticmethod + def _terminate(proc: subprocess.Popen[str]) -> None: + if proc.poll() is not None: + return + proc.send_signal(signal.SIGTERM) + try: + proc.wait(timeout=10) + except subprocess.TimeoutExpired: + proc.kill() + proc.wait() + + def run(self, args: str, *, timeout: int = 30, until: list[str] | None = None) -> str: + """Run a `dapr run` command, stream output, and return it. + + Args: + args: Arguments passed to ``dapr run``. + timeout: Maximum seconds to wait before killing the process. + until: If provided, the process is terminated as soon as every + string in this list has appeared in the accumulated output. + """ + proc = self._spawn(args) + lines: list[str] = [] + remaining = set(until) if until else set() + assert proc.stdout is not None + + # Kill the process if it exceeds the timeout. A background timer is + # needed because `for line in proc.stdout` blocks indefinitely when + # the child never exits. + timer = threading.Timer(timeout, proc.kill) + timer.start() + + try: + for line in proc.stdout: + print(line, end='', flush=True) + lines.append(line) + if remaining: + output_so_far = ''.join(lines) + remaining = {exp for exp in remaining if exp not in output_so_far} + if not remaining: + break + finally: + timer.cancel() + self._terminate(proc) + + return ''.join(lines) + + def start(self, args: str, *, wait: int = 5) -> subprocess.Popen[str]: + """Start a `dapr run` command in the background and return the handle. 
+ + A reader thread continuously drains stdout so the pipe buffer never + fills up (which would block the child process). + """ + proc = self._spawn(args) + self._bg = proc + self._bg_lines = [] + + def drain() -> None: + assert proc.stdout is not None + for line in proc.stdout: + print(line, end='', flush=True) + self._bg_lines.append(line) + + self._bg_reader = threading.Thread(target=drain, daemon=True) + self._bg_reader.start() + time.sleep(wait) + return proc + + def stop(self, proc: subprocess.Popen[str]) -> str: + """Stop a background process and return its captured output.""" + self._terminate(proc) + self._bg = None + if self._bg_reader is not None: + self._bg_reader.join(timeout=5) + self._bg_reader = None + output = ''.join(self._bg_lines) + self._bg_lines = [] + return output + + def cleanup(self) -> None: + """Stop the background process if still running (teardown safety net).""" + if self._bg is not None: + self._terminate(self._bg) + self._bg = None + if self._bg_reader is not None: + self._bg_reader.join(timeout=5) + self._bg_reader = None + self._bg_lines = [] + + +def assert_lines_in_output(output: str, expected_lines: list[str], *, ordered: bool = True) -> None: + """Assert that each expected line appears as a substring in the output. + + Args: + output: The combined stdout/stderr string. + expected_lines: List of strings that must appear in the output. + ordered: If True, the expected lines must appear in order. 
+ """ + missing = [line for line in expected_lines if line not in output] + assert not missing, ( + f'Missing expected lines in output:\n Missing: {missing}\n Output:\n{output}' + ) + + if not ordered: + return + + positions = [output.index(line) for line in expected_lines] + out_of_order = [ + (expected_lines[i], expected_lines[i + 1]) + for i in range(len(positions) - 1) + if positions[i] > positions[i + 1] + ] + assert not out_of_order, ( + f'Lines appeared out of order:\n Out of order pairs: {out_of_order}\n Output:\n{output}' + ) + + +@pytest.fixture +def dapr(request: pytest.FixtureRequest) -> Generator[DaprRunner, Any, None]: + """Provides a DaprRunner scoped to an example directory. + + Use the ``example_dir`` marker to select which example: + + @pytest.mark.example_dir('state_store') + def test_something(dapr): + ... + + Defaults to the examples root if no marker is set. + """ + marker = request.node.get_closest_marker('example_dir') + cwd = EXAMPLES_DIR / marker.args[0] if marker else EXAMPLES_DIR + + runner = DaprRunner(cwd) + yield runner + runner.cleanup() diff --git a/tests/integration/test_configuration.py b/tests/integration/test_configuration.py new file mode 100644 index 000000000..9660e142d --- /dev/null +++ b/tests/integration/test_configuration.py @@ -0,0 +1,51 @@ +import subprocess +import time + +import pytest + + +EXPECTED_LINES = [ + 'Got key=orderId1 value=100 version=1 metadata={}', + 'Got key=orderId2 value=200 version=1 metadata={}', + 'Subscribe key=orderId2 value=210 version=2 metadata={}', + 'Unsubscribed successfully? 
True', +] + + +@pytest.fixture() +def redis_config(): + """Seed configuration values in Redis before the test.""" + subprocess.run( + 'docker exec dapr_redis redis-cli SET orderId1 "100||1"', + shell=True, + check=True, + capture_output=True, + ) + subprocess.run( + 'docker exec dapr_redis redis-cli SET orderId2 "200||1"', + shell=True, + check=True, + capture_output=True, + ) + + +@pytest.mark.example_dir('configuration') +def test_configuration(dapr, redis_config): + proc = dapr.start( + '--app-id configexample --resources-path components/ -- python3 configuration.py', + wait=5, + ) + # Update Redis to trigger the subscription notification + subprocess.run( + 'docker exec dapr_redis redis-cli SET orderId2 "210||2"', + shell=True, + check=True, + capture_output=True, + ) + # configuration.py sleeps 10s after subscribing before it unsubscribes. + # Wait long enough for the full script to finish. + time.sleep(10) + + output = dapr.stop(proc) + for line in EXPECTED_LINES: + assert line in output, f'Missing in output: {line}' diff --git a/tests/integration/test_conversation.py b/tests/integration/test_conversation.py new file mode 100644 index 000000000..76ec5ade0 --- /dev/null +++ b/tests/integration/test_conversation.py @@ -0,0 +1,28 @@ +import pytest + +EXPECTED_LINES = [ + "Result: What's Dapr?", + 'Give a brief overview.', +] + + +@pytest.mark.example_dir('conversation') +def test_conversation_alpha1(dapr): + output = dapr.run( + '--app-id conversation-alpha1 --log-level debug --resources-path ./config ' + '-- python3 conversation_alpha1.py', + timeout=60, + ) + for line in EXPECTED_LINES: + assert line in output, f'Missing in output: {line}' + + +@pytest.mark.example_dir('conversation') +def test_conversation_alpha2(dapr): + output = dapr.run( + '--app-id conversation-alpha2 --log-level debug --resources-path ./config ' + '-- python3 conversation_alpha2.py', + timeout=60, + ) + for line in EXPECTED_LINES: + assert line in output, f'Missing in output: {line}' 
diff --git a/tests/integration/test_crypto.py b/tests/integration/test_crypto.py new file mode 100644 index 000000000..1b7a4c527 --- /dev/null +++ b/tests/integration/test_crypto.py @@ -0,0 +1,64 @@ +import shutil +import subprocess +from pathlib import Path + +import pytest + +REPO_ROOT = Path(__file__).resolve().parent.parent.parent +CRYPTO_DIR = REPO_ROOT / 'examples' / 'crypto' + +EXPECTED_COMMON = [ + 'Running encrypt/decrypt operation on string', + 'Decrypted the message, got 24 bytes', + 'The secret is "passw0rd"', + 'Running encrypt/decrypt operation on file', + 'Wrote encrypted data to encrypted.out', + 'Wrote decrypted data to decrypted.out.jpg', +] + + +@pytest.fixture() +def crypto_keys(): + keys_dir = CRYPTO_DIR / 'keys' + keys_dir.mkdir(exist_ok=True) + subprocess.run( + 'openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 ' + '-out keys/rsa-private-key.pem', + shell=True, + cwd=CRYPTO_DIR, + check=True, + capture_output=True, + ) + subprocess.run( + 'openssl rand -out keys/symmetric-key-256 32', + shell=True, + cwd=CRYPTO_DIR, + check=True, + capture_output=True, + ) + yield + shutil.rmtree(keys_dir, ignore_errors=True) + (CRYPTO_DIR / 'encrypted.out').unlink(missing_ok=True) + (CRYPTO_DIR / 'decrypted.out.jpg').unlink(missing_ok=True) + + +@pytest.mark.example_dir('crypto') +def test_crypto(dapr, crypto_keys): + output = dapr.run( + '--app-id crypto --resources-path ./components/ -- python3 crypto.py', + timeout=30, + ) + assert 'Running gRPC client synchronous API' in output + for line in EXPECTED_COMMON: + assert line in output, f'Missing in output: {line}' + + +@pytest.mark.example_dir('crypto') +def test_crypto_async(dapr, crypto_keys): + output = dapr.run( + '--app-id crypto-async --resources-path ./components/ -- python3 crypto-async.py', + timeout=30, + ) + assert 'Running gRPC client asynchronous API' in output + for line in EXPECTED_COMMON: + assert line in output, f'Missing in output: {line}' diff --git 
a/tests/integration/test_demo_actor.py b/tests/integration/test_demo_actor.py new file mode 100644 index 000000000..bef8e5476 --- /dev/null +++ b/tests/integration/test_demo_actor.py @@ -0,0 +1,45 @@ +import pytest + +EXPECTED_SERVICE = [ + 'Activate DemoActor actor!', + 'has_value: False', + "set_my_data: {'data': 'new_data'}", + 'has_value: True', + 'set reminder to True', + 'set reminder is done', + 'set_timer to True', + 'set_timer is done', + 'clear_my_data', +] + +EXPECTED_CLIENT = [ + 'call actor method via proxy.invoke_method()', + "b'null'", + 'call actor method using rpc style', + 'None', + 'call SetMyData actor method to save the state', + 'call GetMyData actor method to get the state', + 'Register reminder', + 'Register timer', + 'stop reminder', + 'stop timer', + 'clear actor state', +] + + +@pytest.mark.example_dir('demo_actor/demo_actor') +def test_demo_actor(dapr): + service = dapr.start( + '--app-id demo-actor --app-port 3000 -- uvicorn --port 3000 demo_actor_service:app', + wait=10, + ) + client_output = dapr.run( + '--app-id demo-client python3 demo_actor_client.py', + timeout=60, + ) + for line in EXPECTED_CLIENT: + assert line in client_output, f'Missing in client output: {line}' + + service_output = dapr.stop(service) + for line in EXPECTED_SERVICE: + assert line in service_output, f'Missing in service output: {line}' diff --git a/tests/integration/test_distributed_lock.py b/tests/integration/test_distributed_lock.py new file mode 100644 index 000000000..1d353484d --- /dev/null +++ b/tests/integration/test_distributed_lock.py @@ -0,0 +1,21 @@ +import pytest + +EXPECTED_LINES = [ + 'Will try to acquire a lock from lock store named [lockstore]', + 'The lock is for a resource named [example-lock-resource]', + 'The client identifier is [example-client-id]', + 'The lock will will expire in 60 seconds.', + 'Lock acquired successfully!!!', + 'We already released the lock so unlocking will not work.', + 'We tried to unlock it anyway and got back 
[UnlockResponseStatus.lock_does_not_exist]', +] + + +@pytest.mark.example_dir('distributed_lock') +def test_distributed_lock(dapr): + output = dapr.run( + '--app-id=locksapp --app-protocol grpc --resources-path components/ python3 lock.py', + timeout=10, + ) + for line in EXPECTED_LINES: + assert line in output, f'Missing in output: {line}' diff --git a/tests/integration/test_error_handling.py b/tests/integration/test_error_handling.py new file mode 100644 index 000000000..68c46af0e --- /dev/null +++ b/tests/integration/test_error_handling.py @@ -0,0 +1,22 @@ +import pytest + +EXPECTED_LINES = [ + 'Status code: StatusCode.INVALID_ARGUMENT', + "Message: input key/keyPrefix 'key||' can't contain '||'", + 'Error code: DAPR_STATE_ILLEGAL_KEY', + 'Error info(reason): DAPR_STATE_ILLEGAL_KEY', + 'Resource info (resource type): state', + 'Resource info (resource name): statestore', + 'Bad request (field): key||', + "Bad request (description): input key/keyPrefix 'key||' can't contain '||'", +] + + +@pytest.mark.example_dir('error_handling') +def test_error_handling(dapr): + output = dapr.run( + '--resources-path components -- python3 error_handling.py', + timeout=10, + ) + for line in EXPECTED_LINES: + assert line in output, f'Missing in output: {line}' diff --git a/tests/integration/test_grpc_proxying.py b/tests/integration/test_grpc_proxying.py new file mode 100644 index 000000000..a59f03685 --- /dev/null +++ b/tests/integration/test_grpc_proxying.py @@ -0,0 +1,22 @@ +import pytest + +EXPECTED_CALLER = [ + 'Greeter client received: Hello, you!', +] + + +@pytest.mark.example_dir('grpc_proxying') +def test_grpc_proxying(dapr): + receiver = dapr.start( + '--app-id invoke-receiver --app-protocol grpc --app-port 50051 ' + '--config config.yaml -- python invoke-receiver.py', + wait=5, + ) + caller_output = dapr.run( + '--app-id invoke-caller --dapr-grpc-port 50007 --config config.yaml -- python invoke-caller.py', + timeout=30, + ) + for line in EXPECTED_CALLER: + assert line 
in caller_output, f'Missing in caller output: {line}' + + dapr.stop(receiver) diff --git a/tests/integration/test_invoke_binding.py b/tests/integration/test_invoke_binding.py new file mode 100644 index 000000000..e2a39b2c7 --- /dev/null +++ b/tests/integration/test_invoke_binding.py @@ -0,0 +1,66 @@ +import json +import subprocess +import time +import urllib.request +from pathlib import Path + +import pytest + +REPO_ROOT = Path(__file__).resolve().parent.parent.parent +BINDING_DIR = REPO_ROOT / 'examples' / 'invoke-binding' + +EXPECTED_MESSAGES = [ + '{"id":1,"message":"hello world"}', + '{"id":2,"message":"hello world"}', + '{"id":3,"message":"hello world"}', +] + + +@pytest.fixture() +def kafka(): + subprocess.run( + 'docker compose -f ./docker-compose-single-kafka.yml up -d', + shell=True, + cwd=BINDING_DIR, + check=True, + capture_output=True, + ) + time.sleep(30) + yield + subprocess.run( + 'docker compose -f ./docker-compose-single-kafka.yml down', + shell=True, + cwd=BINDING_DIR, + capture_output=True, + ) + + +@pytest.mark.example_dir('invoke-binding') +def test_invoke_binding(dapr, kafka): + receiver = dapr.start( + '--app-id receiver --app-protocol grpc --app-port 50051 ' + '--dapr-http-port 3500 --resources-path ./components python3 invoke-input-binding.py', + wait=5, + ) + + # Publish through the receiver's sidecar (both scripts are infinite, + # so we reimplement the publisher here with a bounded loop). 
+    for n in range(1, 4):
+        body = json.dumps(
+            {
+                'operation': 'create',
+                'data': {'id': n, 'message': 'hello world'},
+            }
+        ).encode()
+        req = urllib.request.Request(
+            'http://localhost:3500/v1.0/bindings/kafkaBinding',
+            data=body,
+            headers={'Content-Type': 'application/json'},
+        )
+        urllib.request.urlopen(req)
+        time.sleep(1)
+
+    time.sleep(5)
+    receiver_output = dapr.stop(receiver)
+    for line in EXPECTED_MESSAGES:
+        assert line in receiver_output, f'Missing in receiver output: {line}'
diff --git a/tests/integration/test_invoke_custom_data.py b/tests/integration/test_invoke_custom_data.py
new file mode 100644
index 000000000..11acdc106
--- /dev/null
+++ b/tests/integration/test_invoke_custom_data.py
@@ -0,0 +1,29 @@
+import pytest
+
+EXPECTED_RECEIVER = [
+    'SOME_DATA',
+]
+
+EXPECTED_CALLER = [
+    'isSuccess: true',
+    'code: 200',
+    'message: "Hello World - Success!"',
+]
+
+
+@pytest.mark.example_dir('invoke-custom-data')
+def test_invoke_custom_data(dapr):
+    receiver = dapr.start(
+        '--app-id invoke-receiver --app-protocol grpc --app-port 50051 python3 invoke-receiver.py',
+        wait=5,
+    )
+    caller_output = dapr.run(
+        '--app-id invoke-caller --app-protocol grpc python3 invoke-caller.py',
+        timeout=30,
+    )
+    for line in EXPECTED_CALLER:
+        assert line in caller_output, f'Missing in caller output: {line}'
+
+    receiver_output = dapr.stop(receiver)
+    for line in EXPECTED_RECEIVER:
+        assert line in receiver_output, f'Missing in receiver output: {line}'
diff --git a/tests/integration/test_invoke_simple.py b/tests/integration/test_invoke_simple.py
new file mode 100644
index 000000000..6852f29d1
--- /dev/null
+++ b/tests/integration/test_invoke_simple.py
@@ -0,0 +1,36 @@
+import json
+import urllib.request
+
+import pytest
+
+EXPECTED_RECEIVER = [
+    '{"id": 1, "message": "hello world"}',
+]
+
+
+@pytest.mark.example_dir('invoke-simple')
+def test_invoke_simple(dapr):
+    receiver = dapr.start(
+        '--app-id invoke-receiver --app-protocol grpc --app-port 50051 '
+        '--dapr-http-port 3500 python3 invoke-receiver.py',
+        wait=5,
+    )
+
+    # invoke-caller.py runs an infinite loop, so we invoke the method
+    # directly through the sidecar HTTP API with a single call.
+    req_data = json.dumps({'id': 1, 'message': 'hello world'}).encode()
+    req = urllib.request.Request(
+        'http://localhost:3500/v1.0/invoke/invoke-receiver/method/my-method',
+        data=req_data,
+        headers={'Content-Type': 'application/json'},
+    )
+    with urllib.request.urlopen(req) as resp:
+        content_type = resp.headers.get('Content-Type', '')
+        body = resp.read().decode()
+
+    assert 'text/plain' in content_type
+    assert 'INVOKE_RECEIVED' in body
+
+    receiver_output = dapr.stop(receiver)
+    for line in EXPECTED_RECEIVER:
+        assert line in receiver_output, f'Missing in receiver output: {line}'
diff --git a/tests/integration/test_jobs.py b/tests/integration/test_jobs.py
new file mode 100644
index 000000000..61eb67560
--- /dev/null
+++ b/tests/integration/test_jobs.py
@@ -0,0 +1,50 @@
+import pytest
+
+EXPECTED_MANAGEMENT = [
+    '0. Scheduling a simple job without data...',
+    'Simple job scheduled successfully',
+    '1. Scheduling a recurring job with cron schedule...',
+    'Recurring job scheduled successfully',
+    '2. Scheduling a one-time job with due_time...',
+    'One-time job scheduled successfully',
+    '3. Scheduling jobs with failure policies...',
+    'Job with drop failure policy scheduled successfully',
+    'Job with constant retry policy scheduled successfully',
+    '4. Getting job details...',
+    'Retrieved job details:',
+    '5. Cleaning up - deleting jobs...',
+    'Deleted job: simple-job',
+    'Deleted job: recurring-hello-job',
+    'Deleted job: one-time-hello-job',
+    'Deleted job: drop-policy-job',
+    'Deleted job: retry-policy-job',
+]
+
+EXPECTED_PROCESSING = [
+    'Dapr Jobs Example',
+    'Starting gRPC server on port 50051...',
+    'Scheduling jobs...',
+    'hello-job scheduled',
+    'data-job scheduled',
+    'Jobs scheduled! Waiting for execution...',
+    'Job event received: hello-job',
+    'Data job event received: data-job',
+]
+
+
+@pytest.mark.example_dir('jobs')
+def test_job_management(dapr):
+    output = dapr.run('--app-id jobs-example -- python3 job_management.py', timeout=30)
+    for line in EXPECTED_MANAGEMENT:
+        assert line in output, f'Missing in output: {line}'
+
+
+@pytest.mark.example_dir('jobs')
+def test_job_processing(dapr):
+    proc = dapr.start(
+        '--app-id jobs-workflow --app-protocol grpc --app-port 50051 python3 job_processing.py',
+        wait=15,
+    )
+    output = dapr.stop(proc)
+    for line in EXPECTED_PROCESSING:
+        assert line in output, f'Missing in output: {line}'
diff --git a/tests/integration/test_langgraph_checkpointer.py b/tests/integration/test_langgraph_checkpointer.py
new file mode 100644
index 000000000..93f758472
--- /dev/null
+++ b/tests/integration/test_langgraph_checkpointer.py
@@ -0,0 +1,21 @@
+import pytest
+
+langchain_ollama = pytest.importorskip('langchain_ollama', reason='langchain-ollama not installed')
+
+EXPECTED_LINES = [
+    'Add 3 and 4.',
+    '7',
+    '14',
+]
+
+
+@pytest.mark.example_dir('langgraph-checkpointer')
+def test_langgraph_checkpointer(dapr):
+    proc = dapr.start(
+        '--app-id langgraph-checkpointer --app-port 5001 --dapr-grpc-port 5002 '
+        '--resources-path ./components -- python3 agent.py',
+        wait=15,
+    )
+    output = dapr.stop(proc)
+    for line in EXPECTED_LINES:
+        assert line in output, f'Missing in output: {line}'
diff --git a/tests/integration/test_metadata.py b/tests/integration/test_metadata.py
new file mode 100644
index 000000000..5beef49da
--- /dev/null
+++ b/tests/integration/test_metadata.py
@@ -0,0 +1,22 @@
+import pytest
+
+EXPECTED_LINES = [
+    'First, we will assign a new custom label to Dapr sidecar',
+    "Now, we will fetch the sidecar's metadata",
+    'And this is what we got:',
+    'application_id: my-metadata-app',
+    'active_actors_count: {}',
+    'registered_components:',
+    'We will update our custom label value and check it was persisted',
+    'We added a custom label named [is-this-our-metadata-example]',
+]
+
+
+@pytest.mark.example_dir('metadata')
+def test_metadata(dapr):
+    output = dapr.run(
+        '--app-id=my-metadata-app --app-protocol grpc --resources-path components/ python3 app.py',
+        timeout=10,
+    )
+    for line in EXPECTED_LINES:
+        assert line in output, f'Missing in output: {line}'
diff --git a/tests/integration/test_pubsub_simple.py b/tests/integration/test_pubsub_simple.py
new file mode 100644
index 000000000..1b4f84981
--- /dev/null
+++ b/tests/integration/test_pubsub_simple.py
@@ -0,0 +1,43 @@
+import time
+
+import pytest
+
+EXPECTED_SUBSCRIBER = [
+    'Subscriber received: id=1, message="hello world", content_type="application/json"',
+    'Subscriber received: id=2, message="hello world", content_type="application/json"',
+    'Subscriber received: id=3, message="hello world", content_type="application/json"',
+    'Wildcard-Subscriber received: id=4, message="hello world", content_type="application/json"',
+    'Wildcard-Subscriber received: id=5, message="hello world", content_type="application/json"',
+    'Wildcard-Subscriber received: id=6, message="hello world", content_type="application/json"',
+    'Dead-Letter Subscriber received: id=7, message="hello world", content_type="application/json"',
+    'Dead-Letter Subscriber. Received via deadletter topic: TOPIC_D_DEAD',
+    'Dead-Letter Subscriber. Originally intended topic: TOPIC_D',
+    'Subscriber received: TOPIC_CE',
+]
+
+EXPECTED_PUBLISHER = [
+    "{'id': 1, 'message': 'hello world'}",
+    "{'id': 2, 'message': 'hello world'}",
+    "{'id': 3, 'message': 'hello world'}",
+    'Bulk published 3 events. Failed entries: 0',
+]
+
+
+@pytest.mark.example_dir('pubsub-simple')
+def test_pubsub_simple(dapr):
+    subscriber = dapr.start(
+        '--app-id python-subscriber --app-protocol grpc --app-port 50051 -- python3 subscriber.py',
+        wait=5,
+    )
+    publisher_output = dapr.run(
+        '--app-id python-publisher --app-protocol grpc --dapr-grpc-port=3500 '
+        '--enable-app-health-check -- python3 publisher.py',
+        timeout=30,
+    )
+    for line in EXPECTED_PUBLISHER:
+        assert line in publisher_output, f'Missing in publisher output: {line}'
+
+    time.sleep(5)
+    subscriber_output = dapr.stop(subscriber)
+    for line in EXPECTED_SUBSCRIBER:
+        assert line in subscriber_output, f'Missing in subscriber output: {line}'
diff --git a/tests/integration/test_pubsub_streaming.py b/tests/integration/test_pubsub_streaming.py
new file mode 100644
index 000000000..81b3055bd
--- /dev/null
+++ b/tests/integration/test_pubsub_streaming.py
@@ -0,0 +1,69 @@
+import time
+
+import pytest
+
+EXPECTED_SUBSCRIBER = [
+    "Processing message: {'id': 1, 'message': 'hello world'} from TOPIC_A1...",
+    "Processing message: {'id': 2, 'message': 'hello world'} from TOPIC_A1...",
+    "Processing message: {'id': 3, 'message': 'hello world'} from TOPIC_A1...",
+    "Processing message: {'id': 4, 'message': 'hello world'} from TOPIC_A1...",
+    "Processing message: {'id': 5, 'message': 'hello world'} from TOPIC_A1...",
+    'Closing subscription...',
+]
+
+EXPECTED_HANDLER_SUBSCRIBER = [
+    "Processing message: {'id': 1, 'message': 'hello world'} from TOPIC_A2...",
+    "Processing message: {'id': 2, 'message': 'hello world'} from TOPIC_A2...",
+    "Processing message: {'id': 3, 'message': 'hello world'} from TOPIC_A2...",
+    "Processing message: {'id': 4, 'message': 'hello world'} from TOPIC_A2...",
+    "Processing message: {'id': 5, 'message': 'hello world'} from TOPIC_A2...",
+    'Closing subscription...',
+]
+
+EXPECTED_PUBLISHER = [
+    "{'id': 1, 'message': 'hello world'}",
+    "{'id': 2, 'message': 'hello world'}",
+    "{'id': 3, 'message': 'hello world'}",
+    "{'id': 4, 'message': 'hello world'}",
+    "{'id': 5, 'message': 'hello world'}",
+]
+
+
+@pytest.mark.example_dir('pubsub-streaming')
+def test_pubsub_streaming(dapr):
+    subscriber = dapr.start(
+        '--app-id python-subscriber --app-protocol grpc -- python3 subscriber.py --topic=TOPIC_A1',
+        wait=5,
+    )
+    publisher_output = dapr.run(
+        '--app-id python-publisher --app-protocol grpc --dapr-grpc-port=3500 '
+        '--enable-app-health-check -- python3 publisher.py --topic=TOPIC_A1',
+        timeout=30,
+    )
+    for line in EXPECTED_PUBLISHER:
+        assert line in publisher_output, f'Missing in publisher output: {line}'
+
+    time.sleep(5)
+    subscriber_output = dapr.stop(subscriber)
+    for line in EXPECTED_SUBSCRIBER:
+        assert line in subscriber_output, f'Missing in subscriber output: {line}'
+
+
+@pytest.mark.example_dir('pubsub-streaming')
+def test_pubsub_streaming_handler(dapr):
+    subscriber = dapr.start(
+        '--app-id python-subscriber --app-protocol grpc -- python3 subscriber-handler.py --topic=TOPIC_A2',
+        wait=5,
+    )
+    publisher_output = dapr.run(
+        '--app-id python-publisher --app-protocol grpc --dapr-grpc-port=3500 '
+        '--enable-app-health-check -- python3 publisher.py --topic=TOPIC_A2',
+        timeout=30,
+    )
+    for line in EXPECTED_PUBLISHER:
+        assert line in publisher_output, f'Missing in publisher output: {line}'
+
+    time.sleep(5)
+    subscriber_output = dapr.stop(subscriber)
+    for line in EXPECTED_HANDLER_SUBSCRIBER:
+        assert line in subscriber_output, f'Missing in subscriber output: {line}'
diff --git a/tests/integration/test_pubsub_streaming_async.py b/tests/integration/test_pubsub_streaming_async.py
new file mode 100644
index 000000000..d98a9a670
--- /dev/null
+++ b/tests/integration/test_pubsub_streaming_async.py
@@ -0,0 +1,69 @@
+import time
+
+import pytest
+
+EXPECTED_SUBSCRIBER = [
+    "Processing message: {'id': 1, 'message': 'hello world'} from TOPIC_B1...",
+    "Processing message: {'id': 2, 'message': 'hello world'} from TOPIC_B1...",
+    "Processing message: {'id': 3, 'message': 'hello world'} from TOPIC_B1...",
+    "Processing message: {'id': 4, 'message': 'hello world'} from TOPIC_B1...",
+    "Processing message: {'id': 5, 'message': 'hello world'} from TOPIC_B1...",
+    'Closing subscription...',
+]
+
+EXPECTED_HANDLER_SUBSCRIBER = [
+    "Processing message: {'id': 1, 'message': 'hello world'} from TOPIC_B2...",
+    "Processing message: {'id': 2, 'message': 'hello world'} from TOPIC_B2...",
+    "Processing message: {'id': 3, 'message': 'hello world'} from TOPIC_B2...",
+    "Processing message: {'id': 4, 'message': 'hello world'} from TOPIC_B2...",
+    "Processing message: {'id': 5, 'message': 'hello world'} from TOPIC_B2...",
+    'Closing subscription...',
+]
+
+EXPECTED_PUBLISHER = [
+    "{'id': 1, 'message': 'hello world'}",
+    "{'id': 2, 'message': 'hello world'}",
+    "{'id': 3, 'message': 'hello world'}",
+    "{'id': 4, 'message': 'hello world'}",
+    "{'id': 5, 'message': 'hello world'}",
+]
+
+
+@pytest.mark.example_dir('pubsub-streaming-async')
+def test_pubsub_streaming_async(dapr):
+    subscriber = dapr.start(
+        '--app-id python-subscriber --app-protocol grpc -- python3 subscriber.py --topic=TOPIC_B1',
+        wait=5,
+    )
+    publisher_output = dapr.run(
+        '--app-id python-publisher --app-protocol grpc --dapr-grpc-port=3500 '
+        '--enable-app-health-check -- python3 publisher.py --topic=TOPIC_B1',
+        timeout=30,
+    )
+    for line in EXPECTED_PUBLISHER:
+        assert line in publisher_output, f'Missing in publisher output: {line}'
+
+    time.sleep(5)
+    subscriber_output = dapr.stop(subscriber)
+    for line in EXPECTED_SUBSCRIBER:
+        assert line in subscriber_output, f'Missing in subscriber output: {line}'
+
+
+@pytest.mark.example_dir('pubsub-streaming-async')
+def test_pubsub_streaming_async_handler(dapr):
+    subscriber = dapr.start(
+        '--app-id python-subscriber --app-protocol grpc -- python3 subscriber-handler.py --topic=TOPIC_B2',
+        wait=5,
+    )
+    publisher_output = dapr.run(
+        '--app-id python-publisher --app-protocol grpc --dapr-grpc-port=3500 '
+        '--enable-app-health-check -- python3 publisher.py --topic=TOPIC_B2',
+        timeout=30,
+    )
+    for line in EXPECTED_PUBLISHER:
+        assert line in publisher_output, f'Missing in publisher output: {line}'
+
+    time.sleep(5)
+    subscriber_output = dapr.stop(subscriber)
+    for line in EXPECTED_HANDLER_SUBSCRIBER:
+        assert line in subscriber_output, f'Missing in subscriber output: {line}'
diff --git a/tests/integration/test_secret_store.py b/tests/integration/test_secret_store.py
new file mode 100644
index 000000000..f14baf0eb
--- /dev/null
+++ b/tests/integration/test_secret_store.py
@@ -0,0 +1,33 @@
+import pytest
+
+EXPECTED_LINES = [
+    "{'secretKey': 'secretValue'}",
+    "[('random', {'random': 'randomValue'}), ('secretKey', {'secretKey': 'secretValue'})]",
+    "{'random': 'randomValue'}",
+]
+
+EXPECTED_LINES_WITH_ACL = [
+    "{'secretKey': 'secretValue'}",
+    "[('secretKey', {'secretKey': 'secretValue'})]",
+    'Got expected error for accessing random key',
+]
+
+
+@pytest.mark.example_dir('secret_store')
+def test_secret_store(dapr):
+    output = dapr.run(
+        '--app-id=secretsapp --app-protocol grpc --resources-path components/ -- python3 example.py',
+        timeout=30,
+    )
+    for line in EXPECTED_LINES:
+        assert line in output, f'Missing in output: {line}'
+
+
+@pytest.mark.example_dir('secret_store')
+def test_secret_store_with_access_control(dapr):
+    output = dapr.run(
+        '--app-id=secretsapp --app-protocol grpc --config config.yaml --resources-path components/ -- python3 example.py',
+        timeout=30,
+    )
+    for line in EXPECTED_LINES_WITH_ACL:
+        assert line in output, f'Missing in output: {line}'
diff --git a/tests/integration/test_state_store.py b/tests/integration/test_state_store.py
new file mode 100644
index 000000000..1f22f6541
--- /dev/null
+++ b/tests/integration/test_state_store.py
@@ -0,0 +1,26 @@
+import pytest
+
+from conftest import assert_lines_in_output
+
+EXPECTED_LINES = [
+    'State store has successfully saved value_1 with key_1 as key',
+    'Cannot save due to bad etag. ErrorCode=StatusCode.ABORTED',
+    'State store has successfully saved value_2 with key_2 as key',
+    'State store has successfully saved value_3 with key_3 as key',
+    'Cannot save bulk due to bad etags. ErrorCode=StatusCode.ABORTED',
+    "Got value=b'value_1' eTag=1",
+    "Got items with etags: [(b'value_1_updated', '2'), (b'value_2', '2')]",
+    'Transaction with outbox pattern executed successfully!',
+    "Got value after outbox pattern: b'val1'",
+    "Got values after transaction delete: [b'', b'']",
+    "Got value after delete: b''",
+]
+
+
+@pytest.mark.example_dir('state_store')
+def test_state_store(dapr):
+    output = dapr.run(
+        '--resources-path components/ -- python3 state_store.py',
+        timeout=30,
+    )
+    assert_lines_in_output(output, EXPECTED_LINES, ordered=True)
diff --git a/tests/integration/test_state_store_query.py b/tests/integration/test_state_store_query.py
new file mode 100644
index 000000000..02319cca6
--- /dev/null
+++ b/tests/integration/test_state_store_query.py
@@ -0,0 +1,55 @@
+import subprocess
+from pathlib import Path
+
+import pytest
+
+REPO_ROOT = Path(__file__).resolve().parent.parent.parent
+EXAMPLE_DIR = REPO_ROOT / 'examples' / 'state_store_query'
+
+EXPECTED_LINES = [
+    '1 {"city": "Seattle", "person": {"id": 1036.0, "org": "Dev Ops"}, "state": "WA"}',
+    '4 {"city": "Spokane", "person": {"id": 1042.0, "org": "Dev Ops"}, "state": "WA"}',
+    '10 {"city": "New York", "person": {"id": 1054.0, "org": "Dev Ops"}, "state": "NY"}',
+    'Token: 3',
+    '9 {"city": "San Diego", "person": {"id": 1002.0, "org": "Finance"}, "state": "CA"}',
+    '7 {"city": "San Francisco", "person": {"id": 1015.0, "org": "Dev Ops"}, "state": "CA"}',
+    '3 {"city": "Sacramento", "person": {"id": 1071.0, "org": "Finance"}, "state": "CA"}',
+    'Token: 6',
+]
+
+
+@pytest.fixture()
+def mongodb():
+    subprocess.run(
+        'docker run -d --rm -p 27017:27017 --name mongodb mongo:5',
+        shell=True,
+        check=True,
+        capture_output=True,
+    )
+    yield
+    subprocess.run(
+        'docker kill mongodb',
+        shell=True,
+        capture_output=True,
+    )
+
+
+@pytest.fixture()
+def import_data(mongodb, dapr):
+    """Import the test dataset into the state store via Dapr."""
+    dapr.run(
+        '--app-id demo --dapr-http-port 3500 --resources-path components '
+        '-- curl -X POST -H "Content-Type: application/json" '
+        '-d @dataset.json http://localhost:3500/v1.0/state/statestore',
+        timeout=15,
+    )
+
+
+@pytest.mark.example_dir('state_store_query')
+def test_state_store_query(dapr, import_data):
+    output = dapr.run(
+        '--app-id queryexample --resources-path components/ -- python3 state_store_query.py',
+        timeout=10,
+    )
+    for line in EXPECTED_LINES:
+        assert line in output, f'Missing in output: {line}'
diff --git a/tests/integration/test_w3c_tracing.py b/tests/integration/test_w3c_tracing.py
new file mode 100644
index 000000000..4cd53780a
--- /dev/null
+++ b/tests/integration/test_w3c_tracing.py
@@ -0,0 +1,25 @@
+import pytest
+
+EXPECTED_CALLER = [
+    'application/json',
+    'SAY',
+    'text/plain',
+    'SLEEP',
+    'Trace ID matches after forwarding',
+]
+
+
+@pytest.mark.example_dir('w3c-tracing')
+def test_w3c_tracing(dapr):
+    receiver = dapr.start(
+        '--app-id invoke-receiver --app-protocol grpc --app-port 3001 python3 invoke-receiver.py',
+        wait=5,
+    )
+    caller_output = dapr.run(
+        '--app-id invoke-caller --app-protocol grpc python3 invoke-caller.py',
+        timeout=30,
+    )
+    for line in EXPECTED_CALLER:
+        assert line in caller_output, f'Missing in caller output: {line}'
+
+    dapr.stop(receiver)
diff --git a/tests/integration/test_workflow.py b/tests/integration/test_workflow.py
new file mode 100644
index 000000000..b05b3a299
--- /dev/null
+++ b/tests/integration/test_workflow.py
@@ -0,0 +1,46 @@
+import pytest
+
+EXPECTED_TASK_CHAINING = [
+    'Step 1: Received input: 42.',
+    'Step 2: Received input: 43.',
+    'Step 3: Received input: 86.',
+    'Workflow completed! Status: WorkflowStatus.COMPLETED',
+]
+
+EXPECTED_FAN_OUT_FAN_IN = [
+    'Final result: 110.',
+]
+
+EXPECTED_SIMPLE = [
+    'Hi Counter!',
+    'New counter value is: 1!',
+    'New counter value is: 11!',
+    'Retry count value is: 0!',
+    'Retry count value is: 1! This print statement verifies retry',
+    'Get response from hello_world_wf after pause call: SUSPENDED',
+    'Get response from hello_world_wf after resume call: RUNNING',
+    'New counter value is: 111!',
+    'New counter value is: 1111!',
+    'Workflow completed! Result: Completed',
+]
+
+
+@pytest.mark.example_dir('workflow')
+def test_task_chaining(dapr):
+    output = dapr.run('-- python3 task_chaining.py', timeout=30)
+    for line in EXPECTED_TASK_CHAINING:
+        assert line in output, f'Missing in output: {line}'
+
+
+@pytest.mark.example_dir('workflow')
+def test_fan_out_fan_in(dapr):
+    output = dapr.run('-- python3 fan_out_fan_in.py', timeout=60)
+    for line in EXPECTED_FAN_OUT_FAN_IN:
+        assert line in output, f'Missing in output: {line}'
+
+
+@pytest.mark.example_dir('workflow')
+def test_simple_workflow(dapr):
+    output = dapr.run('-- python3 simple.py', timeout=60)
+    for line in EXPECTED_SIMPLE:
+        assert line in output, f'Missing in output: {line}'
diff --git a/tox.ini b/tox.ini
index 698e383e9..93cfea2b2 100644
--- a/tox.ini
+++ b/tox.ini
@@ -38,71 +38,30 @@ commands =
     ruff check --fix
     ruff format
 
-[testenv:examples]
+[testenv:integration]
+; Pytest-based integration tests that validate the examples/ directory.
+; Usage: tox -e integration                        # run all
+;        tox -e integration -- test_state_store.py # run one
 passenv = HOME
 basepython = python3
-changedir = ./examples/
-deps =
-    mechanical-markdown
-
+changedir = ./tests/integration/
 commands =
-    ./validate.sh conversation
-    ./validate.sh crypto
-    ./validate.sh metadata
-    ./validate.sh error_handling
-    ./validate.sh pubsub-simple
-    ./validate.sh pubsub-streaming
-    ./validate.sh pubsub-streaming-async
-    ./validate.sh state_store
-    ./validate.sh state_store_query
-    ./validate.sh secret_store
-    ./validate.sh invoke-simple
-    ./validate.sh invoke-custom-data
-    ./validate.sh demo_actor
-    ./validate.sh invoke-binding
-    ./validate.sh grpc_proxying
-    ./validate.sh w3c-tracing
-    ./validate.sh distributed_lock
-    ./validate.sh configuration
-    ./validate.sh workflow
-    ./validate.sh jobs
-    ./validate.sh langgraph-checkpointer
-    ./validate.sh ../
-allowlist_externals=*
-
-commands_pre =
-    pip uninstall -y dapr dapr-ext-grpc dapr-ext-fastapi dapr-ext-langgraph dapr-ext-strands dapr-ext-flask dapr-ext-langgraph dapr-ext-strands
-    pip install -e {toxinidir}/ \
-        -e {toxinidir}/ext/dapr-ext-workflow/ \
-        -e {toxinidir}/ext/dapr-ext-grpc/ \
-        -e {toxinidir}/ext/dapr-ext-fastapi/ \
-        -e {toxinidir}/ext/dapr-ext-langgraph/ \
-        -e {toxinidir}/ext/dapr-ext-strands/ \
-        -e {toxinidir}/ext/flask_dapr/
-
-[testenv:example-component]
-; This environment is used to validate a specific example component.
-; Usage: tox -e example-component -- component_name
-; Example: tox -e example-component -- conversation
-passenv = HOME
-basepython = python3
-changedir = ./examples/
-deps =
-    mechanical-markdown
-commands =
-    ./validate.sh {posargs}
+    pytest {posargs} -v --tb=short
 allowlist_externals=*
 
 commands_pre =
     pip uninstall -y dapr dapr-ext-grpc dapr-ext-fastapi dapr-ext-langgraph dapr-ext-strands dapr-ext-flask dapr-ext-langgraph dapr-ext-strands
-    pip install -e {toxinidir}/ \
+    pip install -r {toxinidir}/dev-requirements.txt \
+        -e {toxinidir}/ \
         -e {toxinidir}/ext/dapr-ext-workflow/ \
         -e {toxinidir}/ext/dapr-ext-grpc/ \
         -e {toxinidir}/ext/dapr-ext-fastapi/ \
         -e {toxinidir}/ext/dapr-ext-langgraph/ \
         -e {toxinidir}/ext/dapr-ext-strands/ \
-        -e {toxinidir}/ext/flask_dapr/
+        -e {toxinidir}/ext/flask_dapr/ \
+        opentelemetry-exporter-zipkin \
+        uvicorn
 
 [testenv:type]
 basepython = python3