diff --git a/CHANGELOG.md b/CHANGELOG.md index 2e51ec3c..e348c1c8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,8 +2,8 @@ All notable changes to this project will be documented in this file. -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), -and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project +adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## Unreleased @@ -20,7 +20,8 @@ ADDED package with `BlobPayloadStore` and `BlobPayloadStoreOptions` - Added `PayloadStore` abstract base class in `durabletask.payload` for custom storage backends -- Added `durabletask.testing` module with `InMemoryOrchestrationBackend` for testing orchestrations without a sidecar process +- Added `durabletask.testing` module with `InMemoryOrchestrationBackend` for testing orchestrations + without a sidecar process - Added `AsyncTaskHubGrpcClient` for asyncio-based applications using `grpc.aio` - Added `DefaultAsyncClientInterceptorImpl` for async gRPC metadata interceptors - Added `get_async_grpc_channel` helper for creating async gRPC channels @@ -28,7 +29,8 @@ ADDED - Added batch client actions for purge and query operations across orchestrations and entities - Added worker work item filtering support - Added new `work_item_filtering` sample -- Improved distributed tracing support with full span coverage for orchestrations, activities, sub-orchestrations, timers, and events +- Improved distributed tracing support with full span coverage for orchestrations, activities, + sub-orchestrations, timers, and events CHANGED @@ -101,7 +103,8 @@ FIXED: ## v0.4.1 -- Fixed an issue where orchestrations would still throw non-determinism errors even when versioning logic should have prevented it +- Fixed an issue where orchestrations would still throw non-determinism errors even when versioning + logic should have 
prevented it ## v0.4.0 @@ -112,7 +115,8 @@ FIXED: ADDED -- Added `ConcurrencyOptions` class for fine-grained concurrency control with separate limits for activities and orchestrations. The thread pool worker count can also be configured. +- Added `ConcurrencyOptions` class for fine-grained concurrency control with separate limits for + activities and orchestrations. The thread pool worker count can also be configured. FIXED @@ -122,15 +126,29 @@ FIXED ADDED -- Added `set_custom_status` orchestrator API ([#31](https://github.com/microsoft/durabletask-python/pull/31)) - contributed by [@famarting](https://github.com/famarting) -- Added `purge_orchestration` client API ([#34](https://github.com/microsoft/durabletask-python/pull/34)) - contributed by [@famarting](https://github.com/famarting) -- Added new `durabletask-azuremanaged` package for use with the [Durable Task Scheduler](https://learn.microsoft.com/azure/azure-functions/durable/durable-task-scheduler/durable-task-scheduler) - by [@RyanLettieri](https://github.com/RyanLettieri) +- Added `set_custom_status` orchestrator API + ([#31](https://github.com/microsoft/durabletask-python/pull/31)) - contributed by + [@famarting](https://github.com/famarting) +- Added `purge_orchestration` client API + ([#34](https://github.com/microsoft/durabletask-python/pull/34)) - contributed by + [@famarting](https://github.com/famarting) +- Added new `durabletask-azuremanaged` package for use with the [Durable Task + Scheduler](https://learn.microsoft.com/azure/azure-functions/durable/durable-task-scheduler/durable-task-scheduler) - by + [@RyanLettieri](https://github.com/RyanLettieri) CHANGED -- Protos are compiled with gRPC 1.62.3 / protobuf 3.25.X instead of the latest release. 
This ensures compatibility with a wider range of grpcio versions for better compatibility with other packages / libraries ([#36](https://github.com/microsoft/durabletask-python/pull/36)) - by [@berndverst](https://github.com/berndverst) -- Http and grpc protocols and their secure variants are stripped from the host name parameter if provided. Secure mode is enabled if the protocol provided is https or grpcs ([#38](https://github.com/microsoft/durabletask-python/pull/38) - by [@berndverst)(https://github.com/berndverst) -- Improve ProtoGen by downloading proto file directly instead of using submodule ([#39](https://github.com/microsoft/durabletask-python/pull/39) - by [@berndverst](https://github.com/berndverst) +- Protos are compiled with gRPC 1.62.3 / protobuf 3.25.X instead of the latest release. This ensures + compatibility with a wider range of grpcio versions for better compatibility with other packages / + libraries ([#36](https://github.com/microsoft/durabletask-python/pull/36)) - by + [@berndverst](https://github.com/berndverst) +- Http and grpc protocols and their secure variants are stripped from the host name parameter if + provided. 
Secure mode is enabled if the protocol provided is https or grpcs + ([#38](https://github.com/microsoft/durabletask-python/pull/38)) - by + [@berndverst](https://github.com/berndverst) +- Improve ProtoGen by downloading proto file directly instead of using submodule + ([#39](https://github.com/microsoft/durabletask-python/pull/39)) - by + [@berndverst](https://github.com/berndverst) CHANGED @@ -140,45 +158,59 @@ CHANGED ADDED -- Add recursive flag in terminate_orchestration to support cascade terminate ([#27](https://github.com/microsoft/durabletask-python/pull/27)) - contributed by [@shivamkm07](https://github.com/shivamkm07) +- Add recursive flag in terminate_orchestration to support cascade terminate + ([#27](https://github.com/microsoft/durabletask-python/pull/27)) - contributed by + [@shivamkm07](https://github.com/shivamkm07) ## v0.1.0 ADDED -- Retry policies for activities and sub-orchestrations ([#11](https://github.com/microsoft/durabletask-python/pull/11)) - contributed by [@DeepanshuA](https://github.com/DeepanshuA) +- Retry policies for activities and sub-orchestrations + ([#11](https://github.com/microsoft/durabletask-python/pull/11)) - contributed by + [@DeepanshuA](https://github.com/DeepanshuA) FIXED -- Fix try/except in orchestrator functions not being handled correctly ([#21](https://github.com/microsoft/durabletask-python/pull/21)) - by [@cgillum](https://github.com/cgillum) -- Updated `durabletask-protobuf` submodule reference to latest distributed tracing commit - by [@cgillum](https://github.com/cgillum) +- Fix try/except in orchestrator functions not being handled correctly + ([#21](https://github.com/microsoft/durabletask-python/pull/21)) - by + [@cgillum](https://github.com/cgillum) +- Updated `durabletask-protobuf` submodule reference to latest distributed tracing commit - by + [@cgillum](https://github.com/cgillum) ## v0.1.0a5 ADDED -- Adds support for secure channels ([#18](https://github.com/microsoft/durabletask-python/pull/18)) - 
contributed by [@elena-kolevska](https://github.com/elena-kolevska) +- Adds support for secure channels + ([#18](https://github.com/microsoft/durabletask-python/pull/18)) - contributed by + [@elena-kolevska](https://github.com/elena-kolevska) FIXED -- Fix zero argument values sent to activities as None ([#13](https://github.com/microsoft/durabletask-python/pull/13)) - contributed by [@DeepanshuA](https://github.com/DeepanshuA) +- Fix zero argument values sent to activities as None + ([#13](https://github.com/microsoft/durabletask-python/pull/13)) - contributed by + [@DeepanshuA](https://github.com/DeepanshuA) ## v0.1.0a3 ADDED -- Add gRPC metadata option ([#16](https://github.com/microsoft/durabletask-python/pull/16)) - contributed by [@DeepanshuA](https://github.com/DeepanshuA) +- Add gRPC metadata option ([#16](https://github.com/microsoft/durabletask-python/pull/16)) - + contributed by [@DeepanshuA](https://github.com/DeepanshuA) CHANGED -- Removed Python 3.7 support due to EOL ([#14](https://github.com/microsoft/durabletask-python/pull/14)) - contributed by [@berndverst](https://github.com/berndverst) +- Removed Python 3.7 support due to EOL + ([#14](https://github.com/microsoft/durabletask-python/pull/14)) - contributed by + [@berndverst](https://github.com/berndverst) ## v0.1.0a2 ADDED - Continue-as-new ([#9](https://github.com/microsoft/durabletask-python/pull/9)) -- Support for Python 3.7+ ([#10](https://github.com/microsoft/durabletask-python/pull/10)) - contributed by [@DeepanshuA](https://github.com/DeepanshuA) +- Support for Python 3.7+ ([#10](https://github.com/microsoft/durabletask-python/pull/10)) - + contributed by [@DeepanshuA](https://github.com/DeepanshuA) ## v0.1.0a1 diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 6c2596b8..ebba57fb 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -2,12 +2,14 @@ This project welcomes contributions and suggestions. 
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us -the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. +the rights to use your contribution. For details, visit +[https://cla.opensource.microsoft.com](https://cla.opensource.microsoft.com). -When you submit a pull request, a CLA bot will automatically determine whether you need to provide -a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions +When you submit a pull request, a CLA bot will automatically determine whether you need to provide a +CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. -This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). -For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or -contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. \ No newline at end of file +This project has adopted the [Microsoft Open Source Code of +Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of +Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact +[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. 
diff --git a/README.md b/README.md index 48a2a467..ac532ffb 100644 --- a/README.md +++ b/README.md @@ -1,37 +1,47 @@ # Durable Task SDK for Python -[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT) -[![Build Validation](https://github.com/microsoft/durabletask-python/actions/workflows/pr-validation.yml/badge.svg)](https://github.com/microsoft/durabletask-python/actions/workflows/pr-validation.yml) +[![License: +MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT) +[![Build +Validation](https://github.com/microsoft/durabletask-python/actions/workflows/pr-validation.yml/badge.svg)](https://github.com/microsoft/durabletask-python/actions/workflows/pr-validation.yml) [![PyPI version](https://badge.fury.io/py/durabletask.svg)](https://badge.fury.io/py/durabletask) -This repo contains a Python SDK for use with the [Azure Durable Task Scheduler](https://github.com/Azure/Durable-Task-Scheduler). With this SDK, you can define, schedule, and manage durable orchestrations using ordinary Python code. +This repo contains a Python SDK for use with the [Azure Durable Task +Scheduler](https://github.com/Azure/Durable-Task-Scheduler). With this SDK, you can define, +schedule, and manage durable orchestrations using ordinary Python code. -> Note that this SDK is **not** currently compatible with [Azure Durable Functions](https://learn.microsoft.com/azure/azure-functions/durable/durable-functions-overview). If you are looking for a Python SDK for Azure Durable Functions, please see [this repo](https://github.com/Azure/azure-functions-durable-python). +> Note that this SDK is **not** currently compatible with [Azure Durable +Functions](https://learn.microsoft.com/azure/azure-functions/durable/durable-functions-overview). If +you are looking for a Python SDK for Azure Durable Functions, please see [this +repo](https://github.com/Azure/azure-functions-durable-python). 
+ +## References -# References - [Supported Patterns](./docs/supported-patterns.md) - [Available Features](./docs/features.md) - [Getting Started](./docs/getting-started.md) -- [Development Guide](./docs/development.md) +- [Development Guide](./docs/development.md) - [Contributing Guide](./CONTRIBUTING.md) ## Optional Features ### Large Payload Externalization -Install the `azure-blob-payloads` extra to automatically offload -oversized orchestration payloads to Azure Blob Storage: +Install the `azure-blob-payloads` extra to automatically offload oversized orchestration payloads to +Azure Blob Storage: ```bash pip install durabletask[azure-blob-payloads] ``` -See the [feature documentation](./docs/features.md#large-payload-externalization) -and the [example](./examples/large_payload/) for usage details. +See the [feature documentation](./docs/features.md#large-payload-externalization) and the +[example](./examples/large_payload/) for usage details. ## Trademarks -This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft -trademarks or logos is subject to and must follow -[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). -Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. -Any use of third-party trademarks or logos are subject to those third-party's policies. + +This project may contain trademarks or logos for projects, products, or services. Authorized use of +Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand +Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). +Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion +or imply Microsoft sponsorship. 
Any use of third-party trademarks or logos is subject to those +third parties' policies. diff --git a/SECURITY.md b/SECURITY.md index e138ec5d..98c08264 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -1,34 +1,52 @@ +# Security + -## Security +## Overview -Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). +Microsoft takes the security of our software products and services seriously, which includes all +source code repositories managed through our GitHub organizations, which include +[Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), +[DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), +[Xamarin](https://github.com/xamarin), and [our GitHub +organizations](https://opensource.microsoft.com/). -If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. +If you believe you have found a security vulnerability in any Microsoft-owned repository that meets +[Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), +please report it to us as described below. ## Reporting Security Issues **Please do not report security vulnerabilities through public GitHub issues.** -Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). 
+Instead, please report them to the Microsoft Security Response Center (MSRC) at +[https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). -If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). +If you prefer to submit without logging in, send email to +[secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP +key; please download it from the [Microsoft Security Response Center PGP Key +page](https://aka.ms/opensource/security/pgpkey). -You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). +You should receive a response within 24 hours. If for some reason you do not, please follow up via +email to ensure we received your original message. Additional information can be found at +[microsoft.com/msrc](https://aka.ms/opensource/security/msrc). -Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: +Please include the requested information listed below (as much as you can provide) to help us better +understand the nature and scope of the possible issue: - * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) 
- * Full paths of source file(s) related to the manifestation of the issue - * The location of the affected source code (tag/branch/commit or direct URL) - * Any special configuration required to reproduce the issue - * Step-by-step instructions to reproduce the issue - * Proof-of-concept or exploit code (if possible) - * Impact of the issue, including how an attacker might exploit the issue +* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) +* Full paths of source file(s) related to the manifestation of the issue +* The location of the affected source code (tag/branch/commit or direct URL) +* Any special configuration required to reproduce the issue +* Step-by-step instructions to reproduce the issue +* Proof-of-concept or exploit code (if possible) +* Impact of the issue, including how an attacker might exploit the issue This information will help us triage your report more quickly. -If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. +If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty +award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) +page for more details about our active programs. ## Preferred Languages @@ -36,6 +54,7 @@ We prefer all communications to be in English. ## Policy -Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). +Microsoft follows the principle of [Coordinated Vulnerability +Disclosure](https://aka.ms/opensource/security/cvd). 
diff --git a/SUPPORT.md b/SUPPORT.md index 291d4d43..ee82a8e7 100644 --- a/SUPPORT.md +++ b/SUPPORT.md @@ -1,25 +1,29 @@ -# TODO: The maintainer of this repo has not yet edited this file +# Support + +> [!WARNING] +> The maintainer of this repo has not yet edited this file. **REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project? - **No CSS support:** Fill out this template with information about how to file issues and get help. -- **Yes CSS support:** Fill out an intake form at [aka.ms/onboardsupport](https://aka.ms/onboardsupport). CSS will work with/help you to determine next steps. +- **Yes CSS support:** Fill out an intake form at + [aka.ms/onboardsupport](https://aka.ms/onboardsupport). CSS will work with/help you to determine + next steps. - **Not sure?** Fill out an intake as though the answer were "Yes". CSS will help you decide. -*Then remove this first heading from this SUPPORT.MD file before publishing your repo.* - -# Support +Then remove this warning and placeholder instructions from this SUPPORT.md file before publishing +your repo. -## How to file issues and get help +## How to file issues and get help -This project uses GitHub Issues to track bugs and feature requests. Please search the existing -issues before filing new issues to avoid duplicates. For new issues, file your bug or -feature request as a new Issue. +This project uses GitHub Issues to track bugs and feature requests. Please search the existing +issues before filing new issues to avoid duplicates. For new issues, file your bug or feature +request as a new Issue. -For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE -FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. COULD BE A STACK OVERFLOW TAG OR OTHER -CHANNEL. WHERE WILL YOU HELP PEOPLE?**. +For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE +FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. 
COULD BE A STACK OVERFLOW TAG OR OTHER CHANNEL. +WHERE WILL YOU HELP PEOPLE?**. -## Microsoft Support Policy +## Microsoft Support Policy Support for this **PROJECT or PRODUCT** is limited to the resources listed above. diff --git a/docs/development.md b/docs/development.md index dc0f88fd..e8cacb42 100644 --- a/docs/development.md +++ b/docs/development.md @@ -1,15 +1,19 @@ # Development -The following is more information about how to develop this project. Note that development commands require that `make` is installed on your local machine. If you're using Windows, you can install `make` using [Chocolatey](https://chocolatey.org/) or use WSL. +The following is more information about how to develop this project. Note that development commands +require that `make` is installed on your local machine. If you're using Windows, you can install +`make` using [Chocolatey](https://chocolatey.org/) or use WSL. -### Generating protobufs +## Generating protobufs ```sh pip3 install -r dev-requirements.txt make gen-proto ``` -This will download the `orchestrator_service.proto` from the `microsoft/durabletask-protobuf` repo and compile it using `grpcio-tools`. The version of the source proto file that was downloaded can be found in the file `durabletask/internal/PROTO_SOURCE_COMMIT_HASH`. +This will download the `orchestrator_service.proto` from the `microsoft/durabletask-protobuf` repo +and compile it using `grpcio-tools`. The version of the source proto file that was downloaded can be +found in the file `durabletask/internal/PROTO_SOURCE_COMMIT_HASH`. ### Running tests @@ -17,4 +21,4 @@ Tests can be run using the following command from the project root. 
```sh make test -``` \ No newline at end of file +``` diff --git a/docs/features.md b/docs/features.md index ba836c9b..1b00e428 100644 --- a/docs/features.md +++ b/docs/features.md @@ -2,61 +2,114 @@ The following features are currently supported: -### Orchestrations - -Orchestrators are implemented using ordinary Python functions that take an `OrchestrationContext` as their first parameter. The `OrchestrationContext` provides APIs for starting child orchestrations, scheduling activities, and waiting for external events, among other things. Orchestrations are fault-tolerant and durable, meaning that they can automatically recover from failures and rebuild their local execution state. Orchestrator functions must be deterministic, meaning that they must always produce the same output given the same input. - -#### Orchestration versioning - -Orchestrations may be assigned a version when they are first created. If an orchestration is given a version, it will continually be checked during its lifecycle to ensure that it remains compatible with the underlying orchestrator code. If the orchestrator code is updated while an orchestration is running, rules can be set that will define the behavior - whether the orchestration should fail, abandon for reprocessing at a later time, or attempt to run anyway. For more information, see [The provided examples](./supported-patterns.md). For more information about versioning in the context of Durable Functions, see [Orchestration versioning in Durable Functions](https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-orchestration-versioning) (Note that concepts specific to Azure Functions, such as host.json settings, do not apply to this SDK). - -##### Orchestration versioning options - -Both the Durable worker and durable client have versioning configuration available. 
Because versioning checks are handled by the worker, the only information the client needs is a default_version, taken in its constructor, to use as the version for new orchestrations unless otherwise specified. The worker takes a VersioningOptions object with a `default_version` for new sub-orchestrations, a `version` used by the worker for orchestration version comparisons, and two more options giving control over versioning behavior in case of match failures, a `VersionMatchStrategy` and `VersionFailureStrategy`. - -**VersionMatchStrategy** - -| VersionMatchStrategy.NONE | VersionMatchStrategy.STRICT | VersionMatchStrategy.CURRENT_OR_OLDER | -|-|-|-| -| Do not compare orchestration versions | Only allow orchestrations with the same version as the worker | Allow orchestrations with the same or older version as the worker | - -**VersionFailureStrategy** - -| VersionFailureStrategy.REJECT | VersionFailureStrategy.FAIL | -|-|-| -| Abandon execution of the orchestrator, but allow it to be reprocessed later | Fail the orchestration | - -**Strategy examples** - -Scenario 1: You are implementing versioning for the first time in your worker. You want to have a default version for new orchestrations, but do not care about comparing versions with currently running ones. Choose VersionMatchStrategy.NONE, and VersionFailureStrategy does not matter. - -Scenario 2: You are updating an orchestrator's code, and you do not want old orchestrations to continue to be processed on the new code. Bump the default version and the worker version, set VersionMatchStrategy.STRICT and VersionFailureStrategy.FAIL. - -Scenario 3: You are updating an orchestrator's code, and you have ensured the code is version-aware so that it remains backward-compatible with existing orchestrations. Bump the default version and the worker version, and set VersionMatchStrategy.CURRENT_OR_OLDER and VersionFailureStrategy.FAIL. 
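The match and failure strategies described in this section can be sketched as a small plain-Python model. This is an illustrative sketch only, not the SDK's implementation: the enum and function names mirror the options documented here, and the plain string comparison used for "older" versions is a simplifying assumption (the SDK's actual version-ordering rules may differ).

```python
from enum import Enum, auto

class VersionMatchStrategy(Enum):
    NONE = auto()              # do not compare orchestration versions
    STRICT = auto()            # only allow versions equal to the worker's
    CURRENT_OR_OLDER = auto()  # allow versions equal to or older than the worker's

class VersionFailureStrategy(Enum):
    REJECT = auto()  # abandon the work item so it can be reprocessed later
    FAIL = auto()    # fail the orchestration outright

def version_allowed(orchestration_version: str, worker_version: str,
                    strategy: VersionMatchStrategy) -> bool:
    """Decide whether this worker may process an orchestration of the given version."""
    if strategy is VersionMatchStrategy.NONE:
        return True
    if strategy is VersionMatchStrategy.STRICT:
        return orchestration_version == worker_version
    # CURRENT_OR_OLDER: a plain string comparison stands in for real version ordering
    return orchestration_version <= worker_version
```

When the check fails, the configured `VersionFailureStrategy` decides between abandoning the work item for later reprocessing (`REJECT`) and failing the orchestration (`FAIL`).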
- -Scenario 4: You are performing a high-availability deployment, and your orchestrator code contains breaking changes making it not backward-compatible. Bump the default version and the worker version, and set VersionFailureStrategy.REJECT and VersionMatchStrategy.STRICT. Ensure that at least a few of the previous version of workers remain available to continue processing the older orchestrations - eventually, all older orchestrations _should_ land on the correct workers for processing. Once all remaining old orchestrations have been processed, shut down the remaining old workers. +## Orchestrations + +Orchestrators are implemented using ordinary Python functions that take an `OrchestrationContext` as +their first parameter. The `OrchestrationContext` provides APIs for starting child orchestrations, +scheduling activities, and waiting for external events, among other things. Orchestrations are +fault-tolerant and durable, meaning that they can automatically recover from failures and rebuild +their local execution state. Orchestrator functions must be deterministic, meaning that they must +always produce the same output given the same input. + +### Orchestration versioning + +Orchestrations may be assigned a version when they are first created. If an orchestration is given a +version, it will continually be checked during its lifecycle to ensure that it remains compatible +with the underlying orchestrator code. If the orchestrator code is updated while an orchestration is +running, rules can be set that will define the behavior - whether the orchestration should fail, +be abandoned for reprocessing at a later time, or attempt to run anyway. For more information, see [the +provided examples](./supported-patterns.md). 
For more information about versioning in the context of +Durable Functions, see [Orchestration versioning in Durable +Functions](https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-orchestration-versioning) +(Note that concepts specific to Azure Functions, such as host.json settings, do not apply to this +SDK). + +#### Orchestration versioning options + +Both the Durable worker and durable client have versioning configuration available. Because +versioning checks are handled by the worker, the only information the client needs is a +`default_version`, taken in its constructor, to use as the version for new orchestrations unless +otherwise specified. The worker takes a `VersioningOptions` object with a `default_version` for new +sub-orchestrations, a `version` used by the worker for orchestration version comparisons, and two +more options giving control over versioning behavior in case of match failures, a +`VersionMatchStrategy` and `VersionFailureStrategy`. + +##### VersionMatchStrategy + +- `VersionMatchStrategy.NONE`: Do not compare orchestration versions. +- `VersionMatchStrategy.STRICT`: Only allow orchestrations with the same version as the worker. +- `VersionMatchStrategy.CURRENT_OR_OLDER`: Allow orchestrations with the same or older + version as the worker. + +##### VersionFailureStrategy + +- `VersionFailureStrategy.REJECT`: Abandon execution of the orchestrator, but allow it to be + reprocessed later. +- `VersionFailureStrategy.FAIL`: Fail the orchestration. + +##### Strategy examples + +Scenario 1: You are implementing versioning for the first time in your worker. You want to have a +default version for new orchestrations, but do not care about comparing versions with currently +running ones. Choose `VersionMatchStrategy.NONE`, and `VersionFailureStrategy` does not matter. + +Scenario 2: You are updating an orchestrator's code, and you do not want old orchestrations to +continue to be processed on the new code. 
Bump the default version and the worker version, set +`VersionMatchStrategy.STRICT` and `VersionFailureStrategy.FAIL`. + +Scenario 3: You are updating an orchestrator's code, and you have ensured the code is version-aware +so that it remains backward-compatible with existing orchestrations. Bump the default version and +the worker version, and set `VersionMatchStrategy.CURRENT_OR_OLDER` and `VersionFailureStrategy.FAIL`. + +Scenario 4: You are performing a high-availability deployment, and your orchestrator code contains +breaking changes that make it not backward-compatible. Bump the default version and the worker version, +and set `VersionFailureStrategy.REJECT` and `VersionMatchStrategy.STRICT`. Ensure that at least a few workers running +the previous version remain available to continue processing the older orchestrations - +eventually, all older orchestrations _should_ land on the correct workers for processing. Once all +remaining old orchestrations have been processed, shut down the remaining old workers. ### Activities -Activities are implemented using ordinary Python functions that take an `ActivityContext` as their first parameter. Activity functions are scheduled by orchestrations and have at-least-once execution guarantees, meaning that they will be executed at least once but may be executed multiple times in the event of a transient failure. Activity functions are where the real "work" of any orchestration is done. +Activities are implemented using ordinary Python functions that take an `ActivityContext` as their +first parameter. Activity functions are scheduled by orchestrations and have at-least-once execution +guarantees, meaning that they will be executed at least once but may be executed multiple times in +the event of a transient failure. Activity functions are where the real "work" of any orchestration +is done. ### Durable timers -Orchestrations can schedule durable timers using the `create_timer` API. 
These timers are durable, meaning that they will survive orchestrator restarts and will fire even if the orchestrator is not actively in memory. Durable timers can be of any duration, from milliseconds to months. +Orchestrations can schedule durable timers using the `create_timer` API. These timers are durable, +meaning that they will survive orchestrator restarts and will fire even if the orchestrator is not +actively in memory. Durable timers can be of any duration, from milliseconds to months. ### Sub-orchestrations -Orchestrations can start child orchestrations using the `call_sub_orchestrator` API. Child orchestrations are useful for encapsulating complex logic and for breaking up large orchestrations into smaller, more manageable pieces. Sub-orchestrations can also be versioned in a similar manner to their parent orchestrations, however, they do not inherit the parent orchestrator's version. Instead, they will use the default_version defined in the current worker's VersioningOptions unless otherwise specified during `call_sub_orchestrator`. +Orchestrations can start child orchestrations using the `call_sub_orchestrator` API. Child +orchestrations are useful for encapsulating complex logic and for breaking up large orchestrations +into smaller, more manageable pieces. Sub-orchestrations can also be versioned in a similar manner +to their parent orchestrations, however, they do not inherit the parent orchestrator's version. +Instead, they will use the default_version defined in the current worker's VersioningOptions unless +otherwise specified during `call_sub_orchestrator`. ### Entities #### Concepts -Durable Entities provide a way to model small, stateful objects within your orchestration workflows. Each entity has a unique identity and maintains its own state, which is persisted durably. Entities can be interacted with by sending them operations (messages) that mutate or query their state. These operations are processed sequentially, ensuring consistency. 
Examples of uses for durable entities include counters, accumulators, or any other operation which requires state to persist across orchestrations.
+Durable Entities provide a way to model small, stateful objects within your orchestration workflows.
+Each entity has a unique identity and maintains its own state, which is persisted durably. Entities
+can be interacted with by sending them operations (messages) that mutate or query their state. These
+operations are processed sequentially, ensuring consistency. Examples of uses for durable entities
+include counters, accumulators, or any other operation that requires state to persist across
+orchestrations.

-Entities can be invoked from durable clients directly, or from durable orchestrators. They support features like automatic state persistence, concurrency control, and can be locked for exclusive access during critical operations.
+Entities can be invoked from durable clients directly, or from durable orchestrators. They support
+features like automatic state persistence and concurrency control, and they can be locked for
+exclusive access during critical operations.

-Entities are accessed by a unique ID, implemented here as EntityInstanceId. This ID is comprised of two parts, an entity name referring to the function or class that defines the behavior of the entity, and a key which is any string defined in your code. Each entity instance, represented by a distinct EntityInstanceId, has its own state.
+Entities are accessed by a unique ID, implemented here as `EntityInstanceId`. This ID is composed of
+two parts: an entity name referring to the function or class that defines the behavior of the
+entity, and a key, which is any string defined in your code. Each entity instance, represented by a
+distinct `EntityInstanceId`, has its own state. 
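
The name-plus-key addressing described above can be sketched in plain Python. This is an illustrative model only (not the SDK's actual `EntityInstanceId` implementation; the class and field names below are assumptions for the sketch), showing that each distinct name/key pair identifies an independent instance with its own state:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EntityId:
    """Illustrative stand-in for EntityInstanceId: name + key identify one instance."""
    name: str  # the entity function or class defining its behavior, e.g. "Counter"
    key: str   # any string chosen by your code, e.g. a user ID


# Each distinct (name, key) pair maps to its own persisted state.
state = {
    EntityId("Counter", "user-1"): 5,
    EntityId("Counter", "user-2"): 9,
}

print(state[EntityId("Counter", "user-1")])  # 5
```

Because the dataclass is frozen (and therefore hashable), equal name/key pairs always address the same logical instance, mirroring how a given entity ID always resolves to the same entity state.
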
#### Syntax

@@ -86,11 +139,13 @@ class Counter(entities.DurableEntity):
         return self.get_state(int, 0)
 ```

-> Note that the object properties of class-based entities may not be preserved across invocations. Use the derived get_state and set_state methods to access the persisted entity data.
+> Note that the object properties of class-based entities may not be preserved across invocations.
+> Use the derived `get_state` and `set_state` methods to access the persisted entity data.

 ##### Invoking entities

-Entities are invoked using the `signal_entity` or `call_entity` APIs. The Durable Client only allows `signal_entity`:
+Entities are invoked using the `signal_entity` or `call_entity` APIs. The Durable Client only allows
+`signal_entity`:

 ```python
 c = DurableTaskSchedulerClient(host_address=endpoint, secure_channel=True,
@@ -120,7 +175,9 @@ Entities can perform actions such signaling other entities or starting new orche

 ##### Locking and concurrency

-Because entites can be accessed from multiple running orchestrations at the same time, entities may also be locked by a single orchestrator ensuring exclusive access during the duration of the lock (also known as a critical section). Think semaphores:
+Because entities can be accessed from multiple running orchestrations at the same time, entities may
+also be locked by a single orchestrator, ensuring exclusive access for the duration of the lock
+(also known as a critical section). Think semaphores:

 ```python
 with (yield ctx.lock_entities([entity_id_1, entity_id_2])):
@@ -128,39 +185,55 @@ with (yield ctx.lock_entities([entity_id_1, entity_id_2]):
     ...
 ```

-Note that locked entities may not be signalled, and every call to a locked entity must return a result before another call to the same entity may be made from within the critical section. For more details and advanced usage, see the examples and API documentation. 
+Note that locked entities may not be signalled, and every call to a locked entity must return a
+result before another call to the same entity may be made from within the critical section. For more
+details and advanced usage, see the examples and API documentation.

 ##### Deleting entities

-Entites are represented as orchestration instances in your Task Hub, and their state is persisted in the Task Hub as well. When using the Durable Task Scheduler as your durability provider, the backend will automatically clean up entities when their state is empty, this is effectively the "delete" operation to save space in the Task Hub. In the DTS Dashboard, "delete entity" simply signals the entity with the "delete" operation. In this SDK, we provide a default implementation for the "delete" operation to clear the state when using class-based entities, which end users are free to override as needed. Users must implement "delete" manually for function-based entities.
+Entities are represented as orchestration instances in your Task Hub, and their state is persisted in
+the Task Hub as well. When using the Durable Task Scheduler as your durability provider, the backend
+will automatically clean up entities when their state is empty; this is effectively the "delete"
+operation to save space in the Task Hub. In the DTS Dashboard, "delete entity" simply signals the
+entity with the "delete" operation. In this SDK, we provide a default implementation for the
+"delete" operation to clear the state when using class-based entities, which end users are free to
+override as needed. Users must implement "delete" manually for function-based entities.

 ### External events

-Orchestrations can wait for external events using the `wait_for_external_event` API. 
External events +are useful for implementing human interaction patterns, such as waiting for a user to approve an +order before continuing. ### Continue-as-new -Orchestrations can be continued as new using the `continue_as_new` API. This API allows an orchestration to restart itself from scratch, optionally with a new input. +Orchestrations can be continued as new using the `continue_as_new` API. This API allows an +orchestration to restart itself from scratch, optionally with a new input. ### Suspend, resume, and terminate -Orchestrations can be suspended using the `suspend_orchestration` client API and will remain suspended until resumed using the `resume_orchestration` client API. A suspended orchestration will stop processing new events, but will continue to buffer any that happen to arrive until resumed, ensuring that no data is lost. An orchestration can also be terminated using the `terminate_orchestration` client API. Terminated orchestrations will stop processing new events and will discard any buffered events. +Orchestrations can be suspended using the `suspend_orchestration` client API and will remain +suspended until resumed using the `resume_orchestration` client API. A suspended orchestration will +stop processing new events, but will continue to buffer any that happen to arrive until resumed, +ensuring that no data is lost. An orchestration can also be terminated using the +`terminate_orchestration` client API. Terminated orchestrations will stop processing new events and +will discard any buffered events. ### Retry policies -Orchestrations can specify retry policies for activities and sub-orchestrations. These policies control how many times and how frequently an activity or sub-orchestration will be retried in the event of a transient error. +Orchestrations can specify retry policies for activities and sub-orchestrations. 
These policies +control how many times and how frequently an activity or sub-orchestration will be retried in the +event of a transient error. ### Large payload externalization -Orchestration inputs, outputs, and event data are transmitted through -gRPC messages. When these payloads become very large they can exceed -gRPC message size limits or degrade performance. Large payload -externalization solves this by transparently offloading oversized -payloads to an external store (such as Azure Blob Storage) and -replacing them with compact reference tokens in the gRPC messages. +Orchestration inputs, outputs, and event data are transmitted through gRPC messages. When these +payloads become very large they can exceed gRPC message size limits or degrade performance. Large +payload externalization solves this by transparently offloading oversized payloads to an external +store (such as Azure Blob Storage) and replacing them with compact reference tokens in the gRPC +messages. -This feature is **opt-in** and requires installing an optional -dependency: +This feature is **opt-in** and requires installing an optional dependency: ```bash pip install durabletask[azure-blob-payloads] @@ -177,13 +250,13 @@ pip install durabletask[azure-blob-payloads] 3. When the worker or client receives a message containing a token, it downloads and decompresses the original payload automatically. -This process is fully transparent to orchestrator and activity code — -no changes are needed in your workflow logic. +This process is fully transparent to orchestrator and activity code — no changes are needed in your +workflow logic. #### Configuring the blob payload store -The built-in `BlobPayloadStore` uses Azure Blob Storage. Create a -store instance and pass it to both the worker and client: +The built-in `BlobPayloadStore` uses Azure Blob Storage. 
Create a store instance and pass it to both +the worker and client: ```python from durabletask.extensions.azure_blob_payloads import BlobPayloadStore, BlobPayloadStoreOptions @@ -219,8 +292,8 @@ with DurableTaskSchedulerWorker( ) ``` -You can also authenticate using `account_url` and a -`TokenCredential` instead of a connection string: +You can also authenticate using `account_url` and a `TokenCredential` instead of a connection +string: ```python from azure.identity import DefaultAzureCredential @@ -245,18 +318,15 @@ store = BlobPayloadStore(BlobPayloadStoreOptions( #### Cross-SDK compatibility -The blob token format (`blob:v1::`) is -compatible with the .NET Durable Task SDK, enabling -interoperability between Python and .NET workers sharing the same -task hub and storage account. Note that message serialization strategies -may differ for complex objects and custom types. +The blob token format (`blob:v1::`) is compatible with the .NET Durable Task +SDK, enabling interoperability between Python and .NET workers sharing the same task hub and storage +account. Note that message serialization strategies may differ for complex objects and custom types. #### Custom payload stores -You can implement a custom payload store by subclassing -`PayloadStore` from `durabletask.payload` and implementing -the `upload`, `upload_async`, `download`, `download_async`, and -`is_known_token` methods: +You can implement a custom payload store by subclassing `PayloadStore` from `durabletask.payload` +and implementing the `upload`, `upload_async`, `download`, `download_async`, and `is_known_token` +methods: ```python from typing import Optional @@ -277,7 +347,12 @@ class MyPayloadStore(PayloadStore): # Store data and return a unique token string ... - async def upload_async(self, data: bytes, *, instance_id: Optional[str] = None) -> str: + async def upload_async( + self, + data: bytes, + *, + instance_id: Optional[str] = None, + ) -> str: ... 
def download(self, token: str) -> bytes:

@@ -292,42 +367,49 @@ class MyPayloadStore(PayloadStore):
         ...
 ```

-See the [large payload example](../examples/large_payload/) for a
-complete working sample.
+See the [large payload example](../examples/large_payload/) for a complete working sample.

 ### Logging configuration

-Both the TaskHubGrpcWorker and TaskHubGrpcClient (as well as DurableTaskSchedulerWorker and DurableTaskSchedulerClient for durabletask-azuremanaged) accept a log_handler and log_formatter object from `logging`. These can be used to customize verbosity, output location, and format of logs emitted by these sources.
+Both the `TaskHubGrpcWorker` and `TaskHubGrpcClient` (as well as `DurableTaskSchedulerWorker` and
+`DurableTaskSchedulerClient` for durabletask-azuremanaged) accept a `log_handler` and a
+`log_formatter` object from `logging`. These can be used to customize verbosity, output location,
+and format of logs emitted by these sources.

-For example, to output logs to a file called `worker.log` at level `DEBUG`, the following syntax might apply:
+For example, to output logs to a file called `durable.log` at level `DEBUG`, the following syntax
+might apply:

 ```python
 log_handler = logging.FileHandler('durable.log', encoding='utf-8')
 log_handler.setLevel(logging.DEBUG)

-with DurableTaskSchedulerWorker(host_address=endpoint, secure_channel=secure_channel,
-                               taskhub=taskhub_name, token_credential=credential, log_handler=log_handler) as w:
+with DurableTaskSchedulerWorker(
+    host_address=endpoint,
+    secure_channel=secure_channel,
+    taskhub=taskhub_name,
+    token_credential=credential,
+    log_handler=log_handler,
+) as w:
 ```

 > [!NOTE]
-> The worker and client output many logs at the `DEBUG` level that will be useful when understanding orchestration flow and diagnosing issues with Durable applications. Before submitting issues, please attempt a repro of the issue with debug logging enabled. 
+> The worker and client output many logs at the `DEBUG` level that will be useful when understanding
+> orchestration flow and diagnosing issues with Durable applications. Before submitting issues, please
+> attempt a repro of the issue with debug logging enabled.

 ### Work item filtering

-By default a worker receives **all** work items from the backend,
-regardless of which orchestrations, activities, or entities are
-registered. Work item filtering lets you explicitly tell the backend
-which work items a worker can handle so that only matching items are
-dispatched. This is useful when running multiple specialized workers
-against the same task hub.
+By default, a worker receives **all** work items from the backend, regardless of which
+orchestrations, activities, or entities are registered. Work item filtering lets you explicitly tell
+the backend which work items a worker can handle so that only matching items are dispatched. This is
+useful when running multiple specialized workers against the same task hub.

-Work item filtering is **opt-in**. Call `use_work_item_filters()` on
-the worker before starting it.
+Work item filtering is **opt-in**. Call `use_work_item_filters()` on the worker before starting it.

 #### Auto-generated filters

-Calling `use_work_item_filters()` with no arguments builds filters
-automatically from the worker's registry at start time:
+Calling `use_work_item_filters()` with no arguments builds filters automatically from the worker's
+registry at start time:

 ```python
 with DurableTaskSchedulerWorker(...) as w:
@@ -337,9 +419,8 @@ with DurableTaskSchedulerWorker(...) as w:
     w.start()
 ```

-When versioning is configured with `VersionMatchStrategy.STRICT`,
-the worker's version is included in every filter so the backend
-only dispatches work items that match that exact version. 
+When versioning is configured with `VersionMatchStrategy.STRICT`, the worker's version is included +in every filter so the backend only dispatches work items that match that exact version. #### Explicit filters @@ -368,12 +449,11 @@ w.use_work_item_filters(WorkItemFilters( #### Clearing filters -Pass `None` to clear any previously configured filters and return -to the default behaviour of processing all work items: +Pass `None` to clear any previously configured filters and return to the default behaviour of +processing all work items: ```python w.use_work_item_filters(None) ``` -See the full -[work item filtering sample](../examples/work_item_filtering.py). +See the full [work item filtering sample](../examples/work_item_filtering.py). diff --git a/docs/getting-started.md b/docs/getting-started.md index 4f31c223..a92752de 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -1,9 +1,12 @@ # Getting Started -### Run the Order Processing Example -- Check out the [Durable Task Scheduler example](../examples/dts/sub-orchestrations-with-fan-out-fan-in/README.md) +## Run the Order Processing Example + +- Check out the [Durable Task Scheduler + example](../examples/dts/sub-orchestrations-with-fan-out-fan-in/README.md) for detailed instructions on running the order processing example. ### Explore Other Samples -- Visit the [examples](../examples/dts/) directory to find a variety of sample orchestrations and learn how to run them. +- Visit the [examples](../examples/dts/) directory to find a variety of sample orchestrations and + learn how to run them. diff --git a/docs/supported-patterns.md b/docs/supported-patterns.md index 7f768611..46923790 100644 --- a/docs/supported-patterns.md +++ b/docs/supported-patterns.md @@ -2,7 +2,7 @@ The following orchestration patterns are currently supported. 
-### Function chaining
+## Function chaining

 An orchestration can chain a sequence of function calls using the following syntax:

@@ -24,7 +24,8 @@ See the full [function chaining example](../examples/activity_sequence.py).

 ### Fan-out/fan-in

-An orchestration can fan-out a dynamic number of function calls in parallel and then fan-in the results using the following syntax:
+An orchestration can fan-out a dynamic number of function calls in parallel and then fan-in the
+results using the following syntax:

 ```python
 # activity function for getting the list of work items
@@ -52,7 +53,9 @@ See the full [fan-out sample](../examples/fanout_fanin.py).

 ### Human interaction and durable timers

-An orchestration can wait for a user-defined event, such as a human approval event, before proceding to the next step. In addition, the orchestration can create a timer with an arbitrary duration that triggers some alternate action if the external event hasn't been received:
+An orchestration can wait for a user-defined event, such as a human approval event, before proceeding
+to the next step. In addition, the orchestration can create a timer with an arbitrary duration that
+triggers some alternate action if the external event hasn't been received:

 ```python
 def purchase_order_workflow(ctx: task.OrchestrationContext, order: Order):
@@ -77,23 +80,32 @@ def purchase_order_workflow(ctx: task.OrchestrationContext, order: Order):
     return f"Approved by '{approval_details.approver}'"
 ```

-As an aside, you'll also notice that the example orchestration above works with custom business objects. Support for custom business objects includes support for custom classes, custom data classes, and named tuples. Serialization and deserialization of these objects is handled automatically by the SDK.
+As an aside, you'll also notice that the example orchestration above works with custom business
+objects. 
Support for custom business objects includes support for custom classes, custom data +classes, and named tuples. Serialization and deserialization of these objects is handled +automatically by the SDK. See the full [human interaction sample](../examples/human_interaction.py). ### Version-aware orchestrator -When utilizing orchestration versioning, it is possible for an orchestrator to remain backwards-compatible with orchestrations created using the previously defined version. For instance, consider an orchestration defined with the following signature: +When utilizing orchestration versioning, it is possible for an orchestrator to remain +backwards-compatible with orchestrations created using the previously defined version. For instance, +consider an orchestration defined with the following signature: ```python def my_orchestrator(ctx: task.OrchestrationContext, order: Order): """Dummy orchestrator function illustrating old logic""" yield ctx.call_activity(activity_one) - yield ctx.call_activity(activity_two) + yield ctx.call_activity(activity_two) return "Success" ``` -Assume that any orchestrations created using this orchestrator were versioned 1.0.0. If the signature of this method needs to be updated to call activity_three between the calls to activity_one and activity_two, ordinarily this would break any running orchestrations at the time of deployment. However, the following orchestrator will be able to process both orchestraions versioned 1.0.0 and 2.0.0 after the change: +Assume that any orchestrations created using this orchestrator were versioned 1.0.0. If the +signature of this method needs to be updated to call activity_three between the calls to +activity_one and activity_two, ordinarily this would break any running orchestrations at the time of +deployment. 
However, the following orchestrator will be able to process both orchestrations versioned
+1.0.0 and 2.0.0 after the change:

 ```python
 def my_orchestrator(ctx: task.OrchestrationContext, order: Order):
@@ -101,7 +113,7 @@ def my_orchestrator(ctx: task.OrchestrationContext, order: Order):
     yield ctx.call_activity(activity_one)
     if ctx.version > '1.0.0':
         yield ctx.call_activity(activity_three)
-    yield ctx.call_activity(activity_two)
+    yield ctx.call_activity(activity_two)
 ```

 Alternatively, if the orchestrator changes completely, the following syntax might be preferred:

@@ -114,22 +126,20 @@ def my_orchestrator(ctx: task.OrchestrationContext, order: Order):
        return "Success"
     yield ctx.call_activity(activity_one)
     yield ctx.call_activity(activity_three)
-    yield ctx.call_activity(activity_two)
-    return "Success"
+    yield ctx.call_activity(activity_two)
+    return "Success"
 ```

 See the full [version-aware orchestrator sample](../examples/version_aware_orchestrator.py)

 ### Work item filtering

-When running multiple workers against the same task hub, each
-worker can declare which work items it handles. The backend then
-dispatches only the matching orchestrations, activities, and
-entities, avoiding unnecessary round-trips. Filtering is opt-in
-and supports both auto-generated and explicit filter sets.
+When running multiple workers against the same task hub, each worker can declare which work items it
+handles. The backend then dispatches only the matching orchestrations, activities, and entities,
+avoiding unnecessary round-trips. Filtering is opt-in and supports both auto-generated and explicit
+filter sets.

-The simplest approach auto-generates filters from the worker's
-registry:
+The simplest approach auto-generates filters from the worker's registry:

 ```python
 with DurableTaskSchedulerWorker(...) 
as w: w.start() ``` -For more control you can provide explicit filters, including -version constraints: +For more control you can provide explicit filters, including version constraints: ```python from durabletask.worker import ( @@ -162,20 +171,16 @@ w.use_work_item_filters(WorkItemFilters( )) ``` -See the full -[work item filtering sample](../examples/work_item_filtering.py). +See the full [work item filtering sample](../examples/work_item_filtering.py). ### Large payload externalization -When orchestrations work with very large inputs, outputs, or event -data, the payloads can exceed gRPC message size limits. The large -payload externalization pattern transparently offloads these payloads -to Azure Blob Storage and replaces them with compact reference tokens -in the gRPC messages. +When orchestrations work with very large inputs, outputs, or event data, the payloads can exceed +gRPC message size limits. The large payload externalization pattern transparently offloads these +payloads to Azure Blob Storage and replaces them with compact reference tokens in the gRPC messages. -No changes are required in orchestrator or activity code. Simply -install the optional dependency and configure a payload store on the -worker and client: +No changes are required in orchestrator or activity code. Simply install the optional dependency and +configure a payload store on the worker and client: ```python from durabletask.extensions.azure_blob_payloads import BlobPayloadStore, BlobPayloadStoreOptions @@ -209,11 +214,9 @@ with DurableTaskSchedulerWorker( state = c.wait_for_orchestration_completion(instance_id, timeout=60) ``` -In this example, any payload exceeding the threshold (default 900 KB) -is compressed and uploaded to the configured Azure Blob container. -When the worker or client reads the message, it downloads and +In this example, any payload exceeding the threshold (default 900 KB) is compressed and uploaded to +the configured Azure Blob container. 
When the worker or client reads the message, it downloads and decompresses the payload automatically. -See the full [large payload example](../examples/large_payload/) and -[feature documentation](./features.md#large-payload-externalization) -for configuration options and details. +See the full [large payload example](../examples/large_payload/) and [feature +documentation](./features.md#large-payload-externalization) for configuration options and details. diff --git a/durabletask-azuremanaged/CHANGELOG.md b/durabletask-azuremanaged/CHANGELOG.md index f24f6a72..79766874 100644 --- a/durabletask-azuremanaged/CHANGELOG.md +++ b/durabletask-azuremanaged/CHANGELOG.md @@ -2,8 +2,8 @@ All notable changes to this project will be documented in this file. -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), -and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project +adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 
## Unreleased @@ -56,7 +56,8 @@ CHANGED: ## v0.3.1 - Updates base dependency to durabletask v0.4.1 - - Fixed an issue where orchestrations would still throw non-determinism errors even when versioning logic should have prevented it + - Fixed an issue where orchestrations would still throw non-determinism errors, + even when versioning logic should have prevented it ## v0.3.0 diff --git a/examples/sub-orchestrations-with-fan-out-fan-in/README.md b/examples/sub-orchestrations-with-fan-out-fan-in/README.md index 8e73e784..93ad3474 100644 --- a/examples/sub-orchestrations-with-fan-out-fan-in/README.md +++ b/examples/sub-orchestrations-with-fan-out-fan-in/README.md @@ -1,85 +1,118 @@ # Portable SDK Sample for Sub Orchestrations and Fan-out / Fan-in -This sample demonstrates how to use the Durable Task SDK, also known as the Portable SDK, with the Durable Task Scheduler to create orchestrations. These orchestrations not only spin off child orchestrations but also perform parallel processing by leveraging the fan-out/fan-in application pattern. +This sample demonstrates how to use the Durable Task SDK, also known as the Portable SDK, with the +Durable Task Scheduler to create orchestrations. These orchestrations not only spin off child +orchestrations but also perform parallel processing by leveraging the fan-out/fan-in application +pattern. -The scenario showcases an order processing system where orders are processed in batches. +The scenario showcases an order processing system where orders are processed in batches. -> Note, for simplicity, this code is contained within a single source file. In real practice, you would have +> Note, for simplicity, this code is contained within a single source file. 
In real practice, you +would have + +## Prerequisites -# Prerequisites If using a deployed Durable Task Scheduler: - - [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli) - - [`az durabletask` CLI extension](https://learn.microsoft.com/en-us/cli/azure/durabletask?view=azure-cli-latest) + +- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli) +- [`az durabletask` CLI + extension](https://learn.microsoft.com/en-us/cli/azure/durabletask?view=azure-cli-latest) ## Running the Examples + There are two separate ways to run an example: - Using the Emulator (recommended for learning and development) -- Using a deployed Scheduler and Taskhub in Azure +- Using a deployed Scheduler and Taskhub in Azure ### Running with the Emulator -We recommend using the emulator for learning and development as it's faster to set up and doesn't require any Azure resources. The emulator simulates a scheduler and taskhub, packaged into an easy-to-use Docker container. + +We recommend using the emulator for learning and development as it's faster to set up and doesn't +require any Azure resources. The emulator simulates a scheduler and taskhub, packaged into an +easy-to-use Docker container. 1. Install Docker: If it is not already installed. 2. Pull the Docker Image for the Emulator: + ```bash docker pull mcr.microsoft.com/dts/dts-emulator:v0.0.6 ``` -3. Run the Emulator: Wait a few seconds for the container to be ready. +1. Run the Emulator: Wait a few seconds for the container to be ready. + ```bash docker run --name dtsemulator -d -p 8080:8080 mcr.microsoft.com/dts/dts-emulator:v0.0.6 ``` -4. Install the Required Packages: +1. Install the Required Packages: + ```bash pip install -r requirements.txt ``` -Note: The example code has been updated to use the default emulator settings automatically (endpoint: http://localhost:8080, taskhub: default). You don't need to set any environment variables. 
+Note: The example code has been updated to use the default emulator settings automatically
+(endpoint: [http://localhost:8080](http://localhost:8080), taskhub: default). You don't need to set
+any environment variables.

 ### Running with a Deployed Scheduler and Taskhub Resource in Azure
-For production scenarios or when you're ready to deploy to Azure, you can create a taskhub using the Azure CLI:
+
+For production scenarios or when you're ready to deploy to Azure, you can create a taskhub using the
+Azure CLI:

 1. Create a Scheduler:
+
 ```bash
-az durabletask scheduler create --resource-group --name --location --ip-allowlist "[0.0.0.0/0]" --sku-capacity 1 --sku-name "Dedicated" --tags "{'myattribute':'myvalue'}"
+az durabletask scheduler create --resource-group <resource-group> \
+    --name <scheduler-name> \
+    --location <location> --ip-allowlist "[0.0.0.0/0]" --sku-capacity 1 \
+    --sku-name "Dedicated" --tags "{'myattribute':'myvalue'}"
 ```

-2. Create Your Taskhub:
+1. Create Your Taskhub:
+
 ```bash
-az durabletask taskhub create --resource-group --scheduler-name --name
+az durabletask taskhub create --resource-group <resource-group> \
+    --scheduler-name <scheduler-name> --name <taskhub-name>
 ```

-3. Retrieve the Endpoint for the Scheduler: Locate the taskhub in the Azure portal to find the endpoint.
+1. Retrieve the Endpoint for the Scheduler: Locate the taskhub in the Azure portal to find the
+   endpoint.

-4. Set the Environment Variables:
+1. Set the Environment Variables:

 Bash:
+
 ```bash
 export TASKHUB=<taskhub-name>
 export ENDPOINT=<scheduler-endpoint>
 ```
+
 PowerShell:
+
 ```powershell
 $env:TASKHUB = "<taskhub-name>"
 $env:ENDPOINT = "<scheduler-endpoint>"
 ```

-5. Install the Required Packages:
+1. 
Install the Required Packages:
+
 ```bash
 pip install -r requirements.txt
 ```

-### Running the Examples
+### Running the Python Components
+
 You can now execute the sample using Python:

 Start the worker and ensure the TASKHUB and ENDPOINT environment variables are set in your shell:
-```bash
+
+```bash
 python3 ./worker.py
 ```

-Next, start the orchestrator and make sure the TASKHUB and ENDPOINT environment variables are set in your shell:
+Next, start the orchestrator and make sure the TASKHUB and ENDPOINT environment variables are set in
+your shell:
+
 ```bash
 python3 ./orchestrator.py
 ```
@@ -87,8 +120,11 @@ python3 ./orchestrator.py

 You should start seeing logs for processing orders in both shell outputs.

 ### Review Orchestration History and Status in the Durable Task Scheduler Dashboard
+
 To access the Durable Task Scheduler Dashboard, follow these steps:

-- **Using the Emulator**: By default, the dashboard runs on portal 8082. Navigate to http://localhost:8082 and click on the default task hub.
+- **Using the Emulator**: By default, the dashboard runs on port 8082.
+  Navigate to [http://localhost:8082](http://localhost:8082) and click on the default task hub.

-- **Using a Deployed Scheduler**: Navigate to the Scheduler resource. Then, go to the Task Hub subresource that you are using and click on the dashboard URL in the top right corner.
+- **Using a Deployed Scheduler**: Navigate to the Scheduler resource. Then, go to the Task Hub
+  subresource that you are using and click on the dashboard URL in the top right corner.