Producer-Consumer Pattern Implementations: AAL v STL

Producers and Consumers

I have come across a few good examples of the Producer-Consumer pattern, but they tend to use contrived examples that don't map well to real-world usage. I realise that examples are generally contrived for the sake of brevity and clarity, and I admit that the example code I have produced for this blog post is also contrived. Nonetheless, I have tried to make it resemble a real-world implementation of the pattern. To achieve this I have ensured that the concurrent elements of both the Producer and the Consumer aren't completely self-contained (i.e. not just a loop producing a fixed number of elements to be consumed).

In this blog post I will compare two implementations of the Producer-Consumer pattern in C++: one using the Standard Template Library (STL) and the other using the Asynchronous Agents Library (AAL) from the Concurrency Runtime. The example compresses and decompresses strings: the Producer compresses the strings and the Consumer decompresses them, using a run-length encoding implementation taken from another project. This post assumes a solid understanding of both C++ and concurrency/multithreading.

Producer-Consumer Pattern

Wikipedia has the best description that I can find of the Producer-Consumer pattern. Essentially, the pattern describes how to synchronise the interaction between two processes: one process produces items for the other process to consume. The communication between the processes is achieved through a shared buffer, where the Producer populates the buffer and the Consumer removes items from it. Access to the buffer needs to be controlled so that only one process at a time modifies it, in order to maintain its integrity. In the example code the IProducer and IConsumer interfaces set the contract that each type of Producer and Consumer must fulfil.
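A minimal sketch of what those contracts might look like is shown below. The interface names come from the project, but the member functions and the use of std::string are my own assumptions for illustration:

```cpp
#include <string>

// Hypothetical contract for anything that produces (compresses) data.
class IProducer
{
public:
    virtual ~IProducer() = default;
    // Compress the given string and place it on the shared buffer.
    virtual void Produce(const std::string& data) = 0;
};

// Hypothetical contract for anything that consumes (decompresses) data.
class IConsumer
{
public:
    virtual ~IConsumer() = default;
    // Remove the next compressed string from the shared buffer and decompress it.
    virtual std::string Consume() = 0;
};
```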

AAL

The Asynchronous Agents Library is an actor-based programming model that uses message passing for communication between actors. It builds on the scheduling and resource management of the Concurrency Runtime (ConcRT) and provides coarse-grained control over the concurrency of an application. That is to say, you issue the requests to produce and consume data, and the ConcRT schedules those tasks for you. Communicating through message passing is an alternative synchronisation approach to using primitives such as a mutex or critical section. The main selling point of the ConcRT is that it provides the concurrent infrastructure for your application by automatically distributing the workload across the available CPUs, which frees us developers to concentrate on the application itself without having to worry too much about concurrency. Using the AAL involves two main decisions: which message block to use as the buffer, and which message-passing functions to use.

Asynchronous Message Blocks

Thread-safe message passing is facilitated through message blocks. Each message block implements the Concurrency::ISource and Concurrency::ITarget interfaces, which represent the endpoints of the messaging network: a source endpoint is one that messages are received from, and a target endpoint is one that messages are sent to. There are nine message block types in total, but I only used two. I used an unbounded_buffer for communicating between the Producer and Consumer; the number of sources and targets is unlimited for this type of buffer, and the order of the messages is maintained between sending and receiving. I also used the single_assignment message block in the Producer in order to signal the agent to finish production. Finally, I used Concurrency::concurrent_queue, a thread-safe queue from the selection of concurrent containers in the Parallel Patterns Library, for storing the data to be sent by the Producer and the data received by the Consumer; it behaves much like a std::queue, but is thread-safe.
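A hedged sketch of how these blocks might be declared, assuming the payload passed between the agents is a std::string; the variable names are illustrative, not taken from the project:

```cpp
#include <agents.h>            // unbounded_buffer, single_assignment
#include <concurrent_queue.h>  // Concurrency::concurrent_queue
#include <string>

// Shared buffer between the Producer and Consumer agents: message order is
// preserved and the number of senders/receivers is not limited.
Concurrency::unbounded_buffer<std::string> compressedBuffer;

// Written to once to signal the Producer agent to finish production.
Concurrency::single_assignment<bool> stopSignal;

// Thread-safe queues holding the raw strings waiting to be compressed and
// the decompressed strings collected by the Consumer.
Concurrency::concurrent_queue<std::string> pendingInput;
Concurrency::concurrent_queue<std::string> consumedOutput;
```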

Message Passing Functions

Messages are passed using three functions. Concurrency::asend(…) is used by the Producer to send messages asynchronously. The non-blocking Concurrency::try_receive(…) is used by the executing Producer agent to check whether it should stop. Finally, the blocking Concurrency::receive(…) is used by the executing Consumer to receive the messages sent by the Producer.
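Below is a rough sketch of how a Producer agent might use asend(…) and try_receive(…), assuming the message blocks declared above and a hypothetical Compress() helper standing in for the run-length encoder; none of the names are taken from the actual project:

```cpp
#include <agents.h>
#include <concurrent_queue.h>
#include <string>

// Hypothetical run-length encoder; the real implementation comes from
// another project referenced in the post.
std::string Compress(const std::string& raw);

class ProducerAgent : public Concurrency::agent
{
public:
    ProducerAgent(Concurrency::ITarget<std::string>& target,
                  Concurrency::ISource<bool>& stop,
                  Concurrency::concurrent_queue<std::string>& input)
        : target_(target), stop_(stop), input_(input) {}

protected:
    void run() override
    {
        bool stopRequested = false;
        // Keep producing until the single_assignment block tells us to stop.
        while (!Concurrency::try_receive(stop_, stopRequested) || !stopRequested)
        {
            std::string raw;
            if (input_.try_pop(raw))
            {
                // Send the compressed string asynchronously; asend does not block.
                Concurrency::asend(target_, Compress(raw));
            }
            else
            {
                Concurrency::wait(10); // pause briefly when there is nothing to compress
            }
        }
        done(); // mark the agent as finished
    }

private:
    Concurrency::ITarget<std::string>& target_;
    Concurrency::ISource<bool>& stop_;
    Concurrency::concurrent_queue<std::string>& input_;
};
```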

The Consumer agent blocks once started, waiting to receive data. The blocking receive(…) function accepts a timeout parameter, which can be set through the overloaded consume method (it defaults to infinite) and specifies how long to wait before the receive call times out. The Consumer agent also stops executing once it receives a particular message from the Producer.
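A possible shape for the Consumer agent is sketched below, assuming a hypothetical Decompress() helper and a made-up sentinel string for the stop message; the real project may structure this differently:

```cpp
#include <agents.h>
#include <concurrent_queue.h>
#include <string>

// Hypothetical run-length decoder matching Compress() above.
std::string Decompress(const std::string& compressed);

// Hypothetical sentinel value; the post only says the Consumer stops on
// "a particular message" from the Producer.
static const std::string kStopMessage = "__STOP__";

class ConsumerAgent : public Concurrency::agent
{
public:
    ConsumerAgent(Concurrency::ISource<std::string>& source,
                  Concurrency::concurrent_queue<std::string>& output,
                  unsigned int timeoutMs = Concurrency::COOPERATIVE_TIMEOUT_INFINITE)
        : source_(source), output_(output), timeoutMs_(timeoutMs) {}

protected:
    void run() override
    {
        try
        {
            for (;;)
            {
                // Block until a message arrives or the timeout elapses
                // (receive throws operation_timed_out on expiry).
                std::string msg = Concurrency::receive(source_, timeoutMs_);
                if (msg == kStopMessage)
                    break; // sentinel from the Producer: stop consuming
                output_.push(Decompress(msg));
            }
        }
        catch (const Concurrency::operation_timed_out&)
        {
            // No message arrived within the timeout; stop consuming.
        }
        done();
    }

private:
    Concurrency::ISource<std::string>& source_;
    Concurrency::concurrent_queue<std::string>& output_;
    unsigned int timeoutMs_;
};
```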

STL

To implement the pattern using the STL I had to manage the threads for the Producer and Consumer manually. Running them on separate threads allows them to run concurrently and independently of each other. Because of the multithreaded nature of the application, I also required a thread-safe container.

Concurrent Queue

I achieved this by wrapping the std::queue container adaptor in a class that used a std::mutex for synchronising access to the container:
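The first version might have looked something like this; the class and method names are assumptions on my part, with pop() matching the method mentioned in the next paragraph:

```cpp
#include <mutex>
#include <queue>

// First attempt: a std::queue guarded by a std::mutex. pop() returns false
// when the queue is empty, so callers end up polling it in a loop.
template <typename T>
class ConcurrentQueue
{
public:
    void push(const T& item)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(item);
    }

    bool pop(T& item)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return false;
        item = queue_.front();
        queue_.pop();
        return true;
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
};
```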

I ended up with something similar to Just Software Solutions and Juan Chopanza, apart from the use of a std::condition_variable. My initial version continually polled the internal queue in the pop() method, increasing contention by locking the mutex on each attempt. Following their recommendation and incorporating a std::condition_variable means that pop() can instead wait to be notified when the queue is populated.
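A sketch of the revised queue, with push() notifying a std::condition_variable that pop() waits on; the exact shape of the class in the project may differ:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Revised version: pop() waits on a condition variable instead of polling,
// and push() notifies a waiting consumer.
template <typename T>
class ConcurrentQueue
{
public:
    void push(const T& item)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(item);
        }
        condition_.notify_one();
    }

    void pop(T& item)
    {
        std::unique_lock<std::mutex> lock(mutex_);
        // The predicate guards against spurious wake-ups.
        condition_.wait(lock, [this] { return !queue_.empty(); });
        item = queue_.front();
        queue_.pop();
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable condition_;
};
```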

Multithreading the Producer & Consumer

To manage the threading infrastructure for the Producer and Consumer, I followed Mario Konrad's PThread tutorial, using std::thread to create the individual threads. The Thread class serves as the base class for both the Producer and the Consumer, and each provides its own implementation of the abstract run() method.
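A minimal sketch of such a Thread base class using std::thread; the start() and join() method names are assumptions rather than the project's actual API:

```cpp
#include <thread>

// Small base class that owns a std::thread; Producer and Consumer derive
// from it and implement run().
class Thread
{
public:
    virtual ~Thread() = default;

    // Launch run() on its own std::thread.
    void start()
    {
        thread_ = std::thread(&Thread::run, this);
    }

    // Wait for run() to finish; call before destroying the derived object.
    void join()
    {
        if (thread_.joinable())
            thread_.join();
    }

protected:
    virtual void run() = 0; // implemented by Producer and Consumer

private:
    std::thread thread_;
};
```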

Windows Performance Toolkit

In order to get a visual representation of what the executing code was doing, I carried out a performance capture using the Windows Performance Recorder (WPR) from the Windows Performance Toolkit and analysed the captured trace in the Windows Performance Analyzer (WPA). Looking at the capture of the STL Producer and Consumer, the Threads graph shows the two threads that we create for the Producer and Consumer to run on:

STL WPA Capture

However, if we analyse the capture of the AAL Producer and Consumer, we can see that four threads were executing during the trace. This capture was taken on my Surface Pro 3, which has two cores that can each service two threads. This is the proof of the pudding for the AAL: the ConcRT automatically distributed the tasks across the four available threads:

AAL WPA Capture

Summary

Hopefully you will find that this example implementation of the Producer-Consumer pattern simulates real-world usage; at the very least it should serve as another contrived example of the pattern. The primary take-away I have from writing this blog post and the associated code is that the ConcRT is very useful. Because it automatically apportions work to the available CPUs, you can write scalable code very easily. And because we as developers don't have to worry about the concurrent infrastructure, there are fewer things for us to get wrong, so our code should be more reliable.

Source Code

The source code associated with this blog post is split into three projects. The ConcRT project produces a DLL containing the logic. The ConcRTTest project contains all of the unit tests. The ConcRTApp project produces a console application to execute the code:

ConcRT Console Application

They can all be found on Bitbucket.
