Runtime Tasking
Taskflow allows you to interact with the scheduling runtime from the execution context of a runtime task. Runtime tasking is mostly used for designing specialized parallel algorithms that go beyond the default scheduling rules of taskflows.
Create a Runtime Task
A runtime task is a callable that takes a reference to a tf::Runtime object, through which you can interact with the scheduling runtime. The following example creates a runtime task, B, that uses its tf::Runtime argument to explicitly schedule a conditioned task, C, which would otherwise never run:
tf::Task A, B, C, D;
std::tie(A, B, C, D) = taskflow.emplace(
  [] () { return 0; },
  [&C] (tf::Runtime& rt) {  // C must be captured by reference
    std::cout << "B\n";
    rt.schedule(C);
  },
  [] () { std::cout << "C\n"; },
  [] () { std::cout << "D\n"; }
);
A.precede(B, C, D);
executor.run(taskflow).wait();
When the condition task A completes and returns 0, the scheduler moves on to the runtime task B. Under normal circumstances, tasks C and D will not run because their conditional dependencies never happen. This can be broken by forcefully scheduling C and/or D via a runtime task that resides in the same graph. Here, the runtime task B calls tf::Runtime::schedule(C) to forcefully run task C, even though the weak dependency between A and C will never happen based on the graph structure itself. As a result, we will see both B and C in the output:
B    # B is a runtime task to schedule C out of its dependency constraint
C
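As a variant (a sketch based on the statement above, not an example from the original text), the runtime task could forcefully schedule both conditioned tasks by capturing C and D by reference and calling tf::Runtime::schedule on each:

tf::Task A, B, C, D;
std::tie(A, B, C, D) = taskflow.emplace(
  [] () { return 0; },
  [&C, &D] (tf::Runtime& rt) {  // both C and D must be captured by reference
    std::cout << "B\n";
    rt.schedule(C);  // forcefully schedule C
    rt.schedule(D);  // forcefully schedule D as well
  },
  [] () { std::cout << "C\n"; },
  [] () { std::cout << "D\n"; }
);
A.precede(B, C, D);
executor.run(taskflow).wait();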
Acquire the Running Executor
You can acquire a reference to the running executor of a runtime task using tf::Runtime::executor(). The running executor is the executor that runs the parent taskflow of that runtime task:
tf::Executor executor;
tf::Taskflow taskflow;
taskflow.emplace([&](tf::Runtime& rt){
  assert(&(rt.executor()) == &executor);
});
executor.run(taskflow).wait();
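Access to the running executor lets a task use the executor's query interface. The sketch below is illustrative only (the printed message is not from the original example); it calls tf::Executor::num_workers and tf::Executor::this_worker_id from inside a runtime task:

taskflow.emplace([](tf::Runtime& rt){
  // query the executor that is running this runtime task
  size_t num_workers = rt.executor().num_workers();
  int worker_id = rt.executor().this_worker_id();
  std::cout << "running on worker " << worker_id
            << " of " << num_workers << " workers\n";
});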
Run a Task Graph Synchronously
A runtime task can spawn and run a task graph synchronously using tf::Runtime::run_and_wait, which returns only after all tasks in the spawned graph have completed:
taskflow.emplace([](tf::Runtime& rt){
  rt.run_and_wait([](tf::Subflow& sf){
    sf.emplace([](){ std::cout << "independent task 1\n"; });
    sf.emplace([](){ std::cout << "independent task 2\n"; });
  });  // subflow joins when run_and_wait returns
});
You can also create a task graph yourself and execute it through a runtime task. This organization avoids repetitive creation of a subflow with the same topology, for example when a runtime task runs repetitively. The following code performs the same execution logic as the above example but uses a pre-built task graph to avoid repetitive creations of a subflow:
// create a custom graph
tf::Taskflow graph;
graph.emplace([](){ std::cout << "independent task 1\n"; });
graph.emplace([](){ std::cout << "independent task 2\n"; });

taskflow.emplace([&](tf::Runtime& rt){
  rt.run_and_wait(graph);  // this worker thread continues the work-stealing loop
});
executor.run_n(taskflow, 10000);
Although tf::Runtime::run_and_wait does not return until the spawned graph completes, it differs from waiting on the future returned by tf::Executor::run (e.g., executor.run(taskflow).wait()), which blocks the caller thread until the submitted taskflow completes. When multiple submitted taskflows are being waited on, their executions can potentially lead to deadlock. Using tf::Runtime::run_and_wait, the calling worker does not block; it continues the work-stealing loop of the executor and returns once all tasks in the spawned graph have finished, thereby avoiding such deadlock.
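The sketch below contrasts the two waiting styles; the taskflow named other and its single task are hypothetical, and the blocking variant is shown only as a comment for comparison:

tf::Executor executor;
tf::Taskflow taskflow;
tf::Taskflow other;  // hypothetical graph to run from inside a task
other.emplace([](){ std::cout << "nested task\n"; });

// Blocking style (for contrast): the worker sleeps on the future and cannot
// steal work while waiting; many tasks waiting like this can deadlock.
// taskflow.emplace([&](){ executor.run(other).wait(); });

// Runtime style: the worker joins the work-stealing loop while waiting.
taskflow.emplace([&](tf::Runtime& rt){
  rt.run_and_wait(other);  // returns when all tasks in 'other' complete
});

executor.run(taskflow).wait();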