template<class T>
class genesis::utils::ProactiveFuture< T >
Wrapper around std::future that implements (pro-)active waiting, i.e., work stealing.
This has the same interface and functionality as std::future, with the key difference that when calling wait(), tasks from the ThreadPool queue are processed while waiting. This prevents the pool from deadlocking when tasks submit tasks of their own and then wait for them. In such a scenario, all threads in the pool could be waiting for their submitted tasks, but none of those tasks can run, because every thread is already occupied with a task (the one that is stuck waiting).
The technique is inspired by the book "C++ Concurrency in Action" by Anthony Williams, second edition, chapter 9, where this idea is mentioned as a way to avoid starving tasks. We here wrap this idea into a class, so that users of the ThreadPool have to use this feature, and hence avoid the deadlock.
Definition at line 79 of file thread_pool.hpp.
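To illustrate the idea in isolation (this is not the genesis implementation), the following sketch shows a pool whose waiting side keeps pulling tasks off the queue instead of blocking. All names here (work_stealing_pool, submit, try_run_pending_task, proactive_wait) are made up for this sketch; in genesis, the equivalent behavior happens inside ProactiveFuture::wait().

#include <chrono>
#include <deque>
#include <future>
#include <mutex>
#include <thread>
#include <utility>

// Standalone sketch of proactive waiting, with made-up names. The waiting
// thread runs queued tasks itself while its own result is not yet ready,
// so nested submit-and-wait patterns cannot exhaust all pool threads.
class work_stealing_pool
{
public:
    // Enqueue a task and return a future for its completion.
    template<class F>
    std::future<void> submit( F&& f )
    {
        std::packaged_task<void()> task( std::forward<F>( f ));
        auto future = task.get_future();
        {
            std::lock_guard<std::mutex> lock( mutex_ );
            queue_.emplace_back( std::move( task ));
        }
        return future;
    }

    // Run one pending task, if any. Returns false if the queue was empty.
    bool try_run_pending_task()
    {
        std::packaged_task<void()> task;
        {
            std::lock_guard<std::mutex> lock( mutex_ );
            if( queue_.empty() ) {
                return false;
            }
            task = std::move( queue_.front() );
            queue_.pop_front();
        }
        task();
        return true;
    }

    // Proactive wait: instead of blocking on the future, keep processing
    // other queued tasks until the result is ready.
    void proactive_wait( std::future<void> const& future )
    {
        while( future.wait_for( std::chrono::seconds( 0 )) != std::future_status::ready ) {
            if( ! try_run_pending_task() ) {
                std::this_thread::yield();
            }
        }
    }

private:
    std::mutex mutex_;
    std::deque<std::packaged_task<void()>> queue_;
};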
Public member functions:

ProactiveFuture () noexcept = default
    Public default constructor, so that for instance a std::vector of ProactiveFuture can be created.

ProactiveFuture (const ProactiveFuture &) = delete

ProactiveFuture (ProactiveFuture &&) noexcept = default

~ProactiveFuture () noexcept = default

bool deferred () const
    Check if the future is deferred, i.e., the result will be computed only when explicitly requested.

T get ()
    Return the result, after calling wait().

template<typename U = T>
std::enable_if< ! std::is_void< U >::value, U & >::type get ()
    Return the result, after calling wait().

template<typename U = T>
std::enable_if< std::is_void< U >::value >::type get ()
    Return the result, after calling wait().

ProactiveFuture & operator= (const ProactiveFuture &) = delete

ProactiveFuture & operator= (ProactiveFuture &&) noexcept = default

bool ready () const
    Check if the future is ready.

bool valid () const noexcept
    Check if the future has a shared state.

void wait () const
    Wait for the result to become available.

template<class Rep, class Period>
std::future_status wait_for (std::chrono::duration< Rep, Period > const & timeout_duration) const
    Wait for the result, return if it is not available for the specified timeout duration.

template<class Clock, class Duration>
std::future_status wait_until (std::chrono::time_point< Clock, Duration > const & timeout_time) const
    Wait for the result, return if it is not available until the specified time point has been reached.
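As a usage sketch (not part of the documented interface above): assuming the genesis ThreadPool offers an enqueue function that returns a ProactiveFuture, results can be collected with valid() and get(), and the default constructor allows storing the futures in a std::vector. The function name enqueue_and_retrieve and the include path below are assumptions and may differ between genesis versions.

#include <vector>

#include "genesis/utils/threading/thread_pool.hpp"  // include path may differ per genesis version

using namespace genesis::utils;

void example( ThreadPool& pool )
{
    // Submit some work to the pool. The exact name of the enqueue function
    // is an assumption here; check your genesis version for the actual API.
    std::vector<ProactiveFuture<int>> futures;
    for( int i = 0; i < 10; ++i ) {
        futures.push_back( pool.enqueue_and_retrieve( [i](){ return i * i; } ));
    }

    // Collect the results. get() first calls wait(), which processes other
    // queued tasks while waiting, so this is safe even if the submitted
    // tasks themselves submit and wait for further tasks in the same pool.
    int sum = 0;
    for( auto& future : futures ) {
        if( future.valid() ) {
            sum += future.get();
        }
    }
    (void) sum;
}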
template<class Rep, class Period>
std::future_status wait_for ( std::chrono::duration< Rep, Period > const & timeout_duration ) const  [inline]
Wait for the result, return if it is not available for the specified timeout duration.
This simply forwards to the wait_for function of the underlying future. Note that it does not perform the proactive waiting that this wrapper is intended for. Hence, if this function is called in a loop until the future is ready, that might never happen, in case the ThreadPool deadlocks due to the task waiting for another task that is then starving. The whole idea of this class is to avoid this scenario by processing these potentially starving tasks while waiting. We hence recommend not to use this function, or at least not in a loop, unless you are sure that none of your tasks submit tasks of their own to the same thread pool.
Definition at line 188 of file thread_pool.hpp.
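To make the caveat concrete, here is a hedged sketch contrasting the discouraged polling loop with the recommended call to get(). The include path is an assumption and may differ between genesis versions.

#include <chrono>

#include "genesis/utils/threading/thread_pool.hpp"  // include path may differ per genesis version

using namespace genesis::utils;

int collect_result( ProactiveFuture<int>& future )
{
    // Discouraged: polling wait_for() in a loop. wait_for() simply forwards
    // to std::future and does not process queued tasks, so if every pool
    // thread is itself stuck waiting, the future may never become ready.
    //
    // while( future.wait_for( std::chrono::milliseconds( 10 )) != std::future_status::ready ) {
    //     /* ... */
    // }

    // Recommended: get() first calls wait(), which processes tasks from the
    // ThreadPool queue while waiting, and hence cannot starve the pool.
    return future.get();
}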