c thread safe queue

The .NET Framework 4 introduces the System.Collections.Concurrent namespace, which includes several collection classes that are both thread-safe and scalable. Multiple threads can safely and efficiently add or remove items from these collections, without requiring additional synchronization in user code. When you write new code, use the concurrent collection classes whenever multiple threads will write to the collection concurrently. If you are only reading from a shared collection, then you can use the classes in the System.Collections.Generic namespace. We recommend that you do not use the .NET Framework 1.0 collection classes unless you are required to target the .NET Framework 1.1 or earlier runtime.
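For example, here is a minimal sketch of the kind of use the recommendation has in mind (ConcurrentQueue<T>, Enqueue, and TryDequeue are the actual .NET Framework 4 APIs; the producer/consumer structure is only illustrative): several threads write to the collection concurrently with no locking in user code.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var queue = new ConcurrentQueue<int>();

        // Several threads enqueue items concurrently; no lock is needed.
        Parallel.For(0, 1000, i => queue.Enqueue(i));

        // Another thread (here, the main thread) can drain the queue safely.
        int item;
        int count = 0;
        while (queue.TryDequeue(out item))
        {
            count++;
        }

        Console.WriteLine(count); // 1000
    }
}
```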

Thread Synchronization in the .NET Framework 1.0 and 2.0 Collections

The collections introduced in the .NET Framework 1.0 are found in the System.Collections namespace. These collections, which include the commonly used ArrayList and Hashtable, provide some thread-safety through the Synchronized property, which returns a thread-safe wrapper around the collection. The wrapper works by locking the entire collection on every add or remove operation. Therefore, each thread that is attempting to access the collection must wait for its turn to take the one lock. This is not scalable and can cause significant performance degradation for large collections. Also, the design is not completely protected from race conditions. For more information, see Synchronization in Generic Collections.
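As a brief sketch of the pattern described above (ArrayList.Synchronized and SyncRoot are the real .NET Framework 1.0-era APIs; the compound check-then-add is an illustrative example of the race conditions the wrapper alone does not cover):

```csharp
using System.Collections;

class SynchronizedWrapperExample
{
    static void Demo()
    {
        // Thread-safe wrapper: every Add/Remove takes one collection-wide lock.
        ArrayList list = ArrayList.Synchronized(new ArrayList());
        list.Add("item");   // safe from any thread, but fully serialized

        // Compound operations are still racy unless you lock manually:
        lock (list.SyncRoot)
        {
            if (!list.Contains("item2"))
            {
                list.Add("item2");
            }
        }
    }
}
```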

The collection classes introduced in the .NET Framework 2.0 are found in the System.Collections.Generic namespace. These include List<T>, Dictionary<TKey, TValue>, and so on. These classes provide improved type safety and performance compared to the .NET Framework 1.0 classes. However, the .NET Framework 2.0 collection classes do not provide any thread synchronization; user code must provide all synchronization when items are added or removed on multiple threads concurrently.
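A minimal sketch of what that user-provided synchronization typically looks like; the class name, field, and lock object below are assumptions for the example, not framework APIs:

```csharp
using System.Collections.Generic;

class SharedList
{
    private readonly List<string> items = new List<string>();
    private readonly object sync = new object();

    // Without the lock, concurrent Add calls can corrupt the list's internal state.
    public void Add(string item)
    {
        lock (sync)
        {
            items.Add(item);
        }
    }

    // Readers also take the lock so they never observe a half-updated list.
    public string[] Snapshot()
    {
        lock (sync)
        {
            return items.ToArray();
        }
    }
}
```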

We recommend the concurrent collections classes in the .NET Framework 4 because they provide not only the type safety of the .NET Framework 2.0 collection classes, but also more efficient and more complete thread safety than the .NET Framework 1.0 collections provide.

Fine-Grained Locking and Lock-Free Mechanisms

Some of the concurrent collection types use lightweight synchronization mechanisms such as SpinLock, SpinWait, SemaphoreSlim, and CountdownEvent, which are new in the .NET Framework 4. These synchronization types typically use busy spinning for brief periods before they put the thread into a true Wait state. When wait times are expected to be very short, spinning is far less computationally expensive than waiting, which involves an expensive kernel transition. For collection classes that use spinning, this efficiency means that multiple threads can add and remove items at a very high rate. For more information about spinning vs. blocking, see SpinLock and SpinWait.
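As a hedged illustration of the spin-then-wait idea (SpinWait.SpinUntil is a real .NET Framework 4 API; the flag and timeout are assumptions for the example), the call below spins briefly and only yields the thread if the condition stays false:

```csharp
using System;
using System.Threading;

class SpinExample
{
    private static volatile bool ready;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            Thread.Sleep(5);   // simulate a very short piece of work
            ready = true;
        });
        worker.Start();

        // Spins first, then yields/sleeps; returns false if the timeout elapses.
        bool signaled = SpinWait.SpinUntil(() => ready, TimeSpan.FromMilliseconds(100));
        Console.WriteLine(signaled ? "signaled" : "timed out");

        worker.Join();
    }
}
```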

The ConcurrentQueue and ConcurrentStack classes do not use locks at all. Instead, they rely on Interlocked operations to achieve thread-safety.
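To give a feel for that style, here is a sketch of a lock-free counter built on a compare-and-swap retry loop. This is not the actual ConcurrentQueue or ConcurrentStack implementation, only the general Interlocked pattern such code relies on.

```csharp
using System.Threading;

class LockFreeCounter
{
    private int value;

    // Retries the compare-and-swap until no other thread has changed the
    // value in between reading it and writing the update back.
    // (Interlocked.Increment does this in a single call; the loop just
    // shows the general pattern.)
    public int Increment()
    {
        int current, updated;
        do
        {
            current = value;
            updated = current + 1;
        }
        while (Interlocked.CompareExchange(ref value, updated, current) != current);
        return updated;
    }
}
```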

Because the concurrent collections classes support ICollection, they provide implementations for the IsSynchronized and SyncRoot properties, even though these properties are irrelevant. IsSynchronized always returns false and SyncRoot is always null (Nothing in Visual Basic).

Firstly, I'll explain a short scenario:

When a signal from certain devices triggers, an object of type Alarm is added to a queue. At a set interval, the queue is checked, and for each Alarm in the queue a method is fired.

However, the problem I'm running into is that if an alarm is added to the queue whilst it's being traversed, it throws an error to say that the queue has changed whilst you were using it. Here's a bit of code to show my queue; just assume that alarms are constantly being inserted into it:

So my question is: how do I make this more thread-safe, so that I won't run into these issues? Perhaps something along the lines of copying the queue to another queue, working on that one, then dequeueing the alarms that were dealt with from the original queue?

Edit: just been informed of ConcurrentQueue, will check this out now.

4 Answers

Alternatively, you could enumerate the queue with foreach; the documentation for ConcurrentQueue's GetEnumerator states:

The enumeration represents a moment-in-time snapshot of the contents of the queue. It does not reflect any updates to the collection after GetEnumerator was called. The enumerator is safe to use concurrently with reads from and writes to the queue.

Thus, the difference between the two approaches arises when your DeQueueAlarm method is called concurrently by multiple threads. With the TryDequeue approach, you are guaranteed that each Alarm in the queue will only be processed once; however, which thread picks up which alarm is not deterministic. With the foreach approach, each racing thread will process all alarms that were in the queue at the point in time when it started iterating over them, so the same alarm can be processed multiple times.

If you want each alarm to be processed exactly once and removed from the queue, you should use the first (TryDequeue) approach.
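Here is a concrete sketch of the two approaches discussed in this answer, assuming an Alarm class and a handler method as in the question (those names, the queue field, and the processor class are illustrative; ConcurrentQueue<T>, TryDequeue, and its snapshot enumerator are the actual framework APIs):

```csharp
using System.Collections.Concurrent;

class Alarm { /* device details omitted */ }

class AlarmProcessor
{
    private readonly ConcurrentQueue<Alarm> alarms = new ConcurrentQueue<Alarm>();

    public void Add(Alarm alarm)
    {
        alarms.Enqueue(alarm);   // safe to call from any device thread
    }

    // First approach: each alarm is dequeued, and therefore handled, exactly once,
    // no matter how many threads call this method concurrently.
    public void DeQueueAlarm()
    {
        Alarm alarm;
        while (alarms.TryDequeue(out alarm))
        {
            Handle(alarm);
        }
    }

    // Second approach: iterate over a moment-in-time snapshot. Items enqueued
    // while iterating are not seen, nothing is removed, and concurrent callers
    // may each handle the same alarm.
    public void ProcessSnapshot()
    {
        foreach (Alarm alarm in alarms)
        {
            Handle(alarm);
        }
    }

    private void Handle(Alarm alarm) { /* fire the alarm-handling method */ }
}
```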

A project I’m working on uses multiple threads to do work on a collection of files. Each thread can add files to the list of files to be processed, so I put together (what I thought was) a thread-safe queue. Relevant portions follow:

However, I am occasionally segfaulting inside the if (...wait_for(lock, timeout) == std::cv_status::no_timeout) { ... } block, and inspection in gdb indicates that the segfaults are occurring because the queue is empty. How is this possible? It was my understanding that wait_for only returns cv_status::no_timeout when it has been notified, and this should only happen after FileQueue::enqueue has just pushed a new item to the queue.

8 Answers

According to the standard, condition_variables are allowed to wake up spuriously, even if the event hasn't occurred. In the case of a spurious wakeup, wait_for will return cv_status::no_timeout (since it woke up instead of timing out), even though it hasn't been notified. The correct solution is, of course, to check whether the wakeup was actually legitimate before proceeding.

The details are specified in the standard §30.5.1 [thread.condition.condvar]:

—The function will unblock when signaled by a call to notify_one(), a call to notify_all(), expiration of the absolute timeout (30.2.4) specified by abs_time, or spuriously.

Returns: cv_status::timeout if the absolute timeout (30.2.4) specified by abs_time expired, otherwise cv_status::no_timeout.

It is best to make the condition (monitored by your condition variable) the inverse of a while-loop's condition: while (!some_condition). Inside this loop, you go to sleep whenever the condition check fails; the failed check is what triggers the body of the loop, which is where you wait.

This way, if your thread is awoken—possibly spuriously—your loop will still check the condition before proceeding. Think of the condition as the state of interest, and think of the condition variable as more of a signal from the system that this state might be ready. The loop will do the heavy lifting of actually confirming that it’s true, and going to sleep if it’s not.
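Here is a minimal C++ sketch of that pattern applied to a queue like the one in the question; the class and member names are assumptions for illustration, not the original FileQueue code. The while-loop (or, equivalently, the predicate overload of wait_for) re-checks the real condition after every wakeup, so a spurious wakeup or an empty queue simply results in waiting again:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

class FileQueue
{
public:
    void enqueue(const std::string& name)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(name);
        }
        cv_.notify_one();
    }

    // Blocks until an item is available. The while-loop re-checks the real
    // condition (queue not empty) after every wakeup, so spurious wakeups
    // are harmless.
    std::string dequeue()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        while (queue_.empty())          // the while(!some_condition) loop
        {
            cv_.wait(lock);
        }
        std::string name = queue_.front();
        queue_.pop();
        return name;
    }

    // Timed variant, closer to the question's wait_for: the predicate overload
    // re-checks the queue itself, so a spurious no_timeout result can never
    // lead to popping from an empty queue.
    bool try_dequeue_for(std::string& out, std::chrono::milliseconds timeout)
    {
        std::unique_lock<std::mutex> lock(mutex_);
        if (!cv_.wait_for(lock, timeout, [this] { return !queue_.empty(); }))
        {
            return false;               // timed out with the queue still empty
        }
        out = queue_.front();
        queue_.pop();
        return true;
    }

private:
    std::queue<std::string> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
};
```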
