Message-passing asynchronous worker classes.
Classes

class dispatch_t
    Describes a workload that does not need to return any results and that is dispatched as a heap-allocated object.
class static_worker_t
    Describes a workload that returns results to be consumed and processed by the caller and that is dispatched as a local object.
class worker_t
    Describes a workload that returns results to be consumed and processed by the caller and that is dispatched as a heap-allocated object.
Functions

bool GetResults ()
    Retrieve all currently pending results.
size_t PendingResults ()
    Get a count of how many results are pending.
void Queue (const class workload_t *const instance)
    Dispatch a worker instance for processing and deletion by a worker thread.
Message-passing asynchronous worker classes.
Overview
The Async Workers library is a small collection of classes that leverage ZeroMQ (http://www.zeromq.org/) and the OpenThreads(*) library (http://openthreads.sourceforge.net/) to provide a convenient means of offloading asynchronous workloads.
(* OpenThreads is used for the creation and destruction of threads. If OpenThreads is not available on your platform or is not an option for your project, adapting the WorkerPool to use your native threading API is a few minutes' work.)
It can also be used to implement a very efficient, Erlang-style, messaging-based form of parallelism, similar to the "pragma task" feature of OpenMP 3.0.
Async Workers creates a pool of worker threads, one for each available CPU core. Each thread calls zmq_recv() and blocks in an OS-scheduler-friendly I/O wait until a message arrives.
Work is delivered to a thread as a pointer to an instance of an object derived from one of the base classes, eliminating copying and allowing data to remain "hot" in CPU cache if a thread is available to process it immediately.
Using Async Workers
Encapsulate your workload in one of the base classes listed above, or see async::workload_t, the fundamental base class, if you need to derive something different.
Implement the "Work()" function with the workload you want doing; Use the Queue() function to dispatch the work to a worker thread.
By default workers use the delete method on themselves once used, to override this default behavior, e.g. for a local stack instance, overload the Destroy() member function.
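As an illustration, here is a minimal fire-and-forget sketch. The header name, the exact signature of Work(), and the LogFlusher class are assumptions made for this example; only dispatch_t, Queue() and Destroy() are taken from this page.

    // #include "async_workers.h"   // header name is an assumption
    #include <cstdio>

    // A job that returns no results, so dispatch_t is the natural base class.
    class LogFlusher : public async::dispatch_t
    {
    public:
        explicit LogFlusher(const char *path) : m_path(path) {}

        // Runs on a worker thread; an argument-free void Work() is assumed.
        virtual void Work()
        {
            std::printf("flushing %s\n", m_path);
            // ... write buffered log data to m_path ...
        }

        // Not overridden here: the default Destroy() deletes the object after
        // Work() completes. Override Destroy() only for instances that must
        // not be deleted, e.g. stack-allocated ones.

    private:
        const char *m_path;
    };

    // Dispatch; the worker pool takes ownership of the heap-allocated object.
    // async::Queue(new LogFlusher("/var/log/app.log"));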
Retrieving results
To collect results from workers or re-use the objects, use the async::worker_t or async::static_worker_t class as your base, and implement both the Work() and Result() virtual functions.
When the work has been executed, the worker thread sends the object back, where it is retrieved when you call async::GetResults().
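Here is a sketch of a result-returning workload under the same assumptions (the header name, the argument-free Work() and Result() signatures, and the Checksum class are illustrative, not taken from the library):

    #include <cstdio>
    #include <vector>

    // worker_t: the object is sent back to the caller after Work() has run.
    class Checksum : public async::worker_t
    {
    public:
        explicit Checksum(const std::vector<unsigned char> &data)
            : m_data(data), m_sum(0) {}

        // Runs on a worker thread.
        virtual void Work()
        {
            for (size_t i = 0; i < m_data.size(); ++i)
                m_sum += m_data[i];
        }

        // Runs on the caller's thread, from inside async::GetResults().
        virtual void Result()
        {
            std::printf("checksum = %u\n", m_sum);
        }

    private:
        std::vector<unsigned char> m_data;
        unsigned int m_sum;
    };

    // Dispatch a few jobs, then block until every Result() has been delivered:
    // async::Queue(new Checksum(block0));
    // async::Queue(new Checksum(block1));
    // async::GetResults();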
Base classes
Notes and caveats
bool async::GetResults ( )
Retrieve all currently pending results.
Blocks the caller until all work loads that were marked to return results have completed and the results have been retrieved.
size_t async::PendingResults ( )
Get a count of how many results are pending.
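A small sketch of how the two calls might be combined; whether "pending" counts completed-but-unretrieved results or all outstanding workloads is an assumption here:

    #include <cstdio>

    void wait_with_progress()
    {
        // Snapshot of outstanding results (assumed semantics), then block
        // until GetResults() has delivered all of them.
        std::printf("waiting on %zu result(s)\n", async::PendingResults());
        async::GetResults();
    }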
void async::Queue ( const class workload_t *const instance )
Dispatch a worker instance for processing and deletion by a worker thread.
async::Queue(new MyWorker(...)) is the preferred method for dispatching background work.
Parameters
    [in] instance    Pointer to the entity to process.