Understanding Synchronization in .NET
The Problem: Shared Data and Race Conditions
Imagine two threads incrementing a shared counter:
```csharp
int counter = 0;

void Increment()
{
    for (int i = 0; i < 1000; i++)
        counter++;
}
```
If both threads run Increment() simultaneously, you might expect the final result to be 2000. In reality, you’ll likely get something less. The reason is that counter++ is not atomic—it involves reading, modifying, and writing back the value, allowing another thread to intervene in between.
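A runnable sketch of the race (the iteration count is raised to 1,000,000 here so the lost updates are easy to observe; the class name is illustrative):

```csharp
using System;
using System.Threading;

class RaceDemo
{
    public static int counter = 0;

    public static void Increment()
    {
        for (int i = 0; i < 1_000_000; i++)
            counter++; // read, add one, write back: three steps, not atomic
    }

    static void Main()
    {
        var t1 = new Thread(Increment);
        var t2 = new Thread(Increment);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();

        // Expected 2,000,000, but lost updates usually make it smaller.
        Console.WriteLine(counter);
    }
}
```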
lock and Monitor
The simplest way to synchronize access to a shared resource is by using the lock keyword, which internally uses Monitor.Enter and Monitor.Exit. The lock statement ensures that only one thread can enter a critical section at a time, preventing other threads from accessing the same resource until the lock is released.
```csharp
private readonly object _syncLock = new();

void SafeIncrement()
{
    lock (_syncLock)
    {
        counter++;
    }
}
```
Under the hood, lock expands to a Monitor-based try/finally. Since C# 4.0 the compiler uses the Monitor.Enter(object, ref bool) overload, so the lock is released only if it was actually acquired:

```csharp
bool lockTaken = false;
try
{
    Monitor.Enter(_syncLock, ref lockTaken);
    counter++;
}
finally
{
    if (lockTaken)
        Monitor.Exit(_syncLock);
}
```
Monitor also supports advanced operations like waiting and signaling between threads:
- Wait() – temporarily releases the lock and puts the thread in a waiting state until it is signaled.
- Pulse() – notifies one waiting thread that it can proceed.
- PulseAll() – wakes up all waiting threads, allowing them to compete for the lock again.
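As a sketch of how Wait and Pulse combine, here is a minimal blocking queue; the class name and the use of a Queue&lt;int&gt; are illustrative, not from the original:

```csharp
using System.Collections.Generic;
using System.Threading;

class BlockingQueue
{
    private readonly Queue<int> _queue = new();
    private readonly object _syncLock = new();

    public void Enqueue(int item)
    {
        lock (_syncLock)
        {
            _queue.Enqueue(item);
            Monitor.Pulse(_syncLock); // wake one waiting consumer
        }
    }

    public int Dequeue()
    {
        lock (_syncLock)
        {
            // Wait releases the lock and re-acquires it when pulsed;
            // the loop re-checks the condition to guard against
            // spurious wakeups or another consumer taking the item first.
            while (_queue.Count == 0)
                Monitor.Wait(_syncLock);
            return _queue.Dequeue();
        }
    }
}
```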
Semaphore and SemaphoreSlim
A semaphore limits the number of threads that can enter a critical section. This is useful for throttling concurrent operations, such as database queries or file access.
```csharp
SemaphoreSlim semaphore = new(3); // Allow up to 3 concurrent threads

async Task AccessResourceAsync()
{
    await semaphore.WaitAsync();
    try
    {
        await Task.Delay(1000); // Simulate work
    }
    finally
    {
        semaphore.Release();
    }
}
```
SemaphoreSlim is a lightweight, in-process primitive that supports asynchronous waiting via WaitAsync, while Semaphore wraps an operating-system semaphore, blocks the waiting thread, and can be named to coordinate across processes.
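To see the throttling in action, this sketch launches ten tasks against a three-slot semaphore and records the highest concurrency observed (the peak tracking uses Interlocked, covered in the next section; all names are illustrative):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ThrottleDemo
{
    static readonly SemaphoreSlim semaphore = new(3); // at most 3 workers at once
    public static int active; // workers currently inside the critical section
    public static int peak;   // highest concurrency observed

    public static async Task AccessResourceAsync()
    {
        await semaphore.WaitAsync();
        try
        {
            int now = Interlocked.Increment(ref active);
            // Record the highest value of 'active' seen so far.
            int observed;
            while (now > (observed = Volatile.Read(ref peak)))
                Interlocked.CompareExchange(ref peak, now, observed);
            await Task.Delay(100); // simulate work
        }
        finally
        {
            Interlocked.Decrement(ref active);
            semaphore.Release();
        }
    }

    static async Task Main()
    {
        await Task.WhenAll(Enumerable.Range(0, 10).Select(_ => AccessResourceAsync()));
        Console.WriteLine($"peak concurrency: {peak}"); // never exceeds 3
    }
}
```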
Interlocked
For simple atomic operations, using a full lock is overkill. The Interlocked class provides low-level atomic operations that help avoid race conditions with minimal overhead.
```csharp
int counter = 0;

void Increment()
{
    Interlocked.Increment(ref counter);
}
```
Here’s what each method does:
- Increment – adds one to a variable atomically.
- Decrement – subtracts one from a variable atomically.
- Add – adds a specific value atomically.
- Exchange – replaces the current value with a new one atomically.
- CompareExchange – updates a value only if it matches a specified comparison value, ensuring thread-safe conditional updates.
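CompareExchange is the building block for lock-free retry loops. Here is a sketch of an atomic "store the maximum" helper (the AtomicMax name and class are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CasDemo
{
    public static int max = 0;

    // Atomically raises 'max' to 'value' if 'value' is larger, retrying
    // whenever another thread changes 'max' between the read and the CAS.
    public static void AtomicMax(int value)
    {
        int current = Volatile.Read(ref max);
        while (value > current)
        {
            int observed = Interlocked.CompareExchange(ref max, value, current);
            if (observed == current)
                return; // our update won
            current = observed; // lost the race; re-check against the new value
        }
    }

    static void Main()
    {
        Parallel.For(0, 1000, AtomicMax);
        Console.WriteLine(max); // 999
    }
}
```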
ReaderWriterLockSlim
Sometimes multiple threads only need read access, and locking them all out unnecessarily reduces performance. ReaderWriterLockSlim allows multiple readers concurrently, but only one writer at a time.
```csharp
ReaderWriterLockSlim rwLock = new();
List<int> numbers = new();

void AddNumber(int n)
{
    rwLock.EnterWriteLock();
    try
    {
        numbers.Add(n);
    }
    finally
    {
        rwLock.ExitWriteLock();
    }
}

int GetCount()
{
    rwLock.EnterReadLock();
    try
    {
        return numbers.Count;
    }
    finally
    {
        rwLock.ExitReadLock();
    }
}
```
This lock is ideal when reads are frequent and writes are rare, offering better throughput than a simple lock.
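ReaderWriterLockSlim also offers EnterUpgradeableReadLock for check-then-write patterns: take a read-compatible lock first and upgrade to a write lock only when a write turns out to be needed. A sketch (the class and method names are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading;

class NumberStore
{
    private readonly ReaderWriterLockSlim _rwLock = new();
    private readonly List<int> _numbers = new();

    // Adds the number only if it is not present yet. The upgradeable
    // read lock allows concurrent plain readers during the check and
    // is promoted to a write lock only for the actual insert.
    public bool AddIfMissing(int n)
    {
        _rwLock.EnterUpgradeableReadLock();
        try
        {
            if (_numbers.Contains(n))
                return false;
            _rwLock.EnterWriteLock();
            try
            {
                _numbers.Add(n);
                return true;
            }
            finally
            {
                _rwLock.ExitWriteLock();
            }
        }
        finally
        {
            _rwLock.ExitUpgradeableReadLock();
        }
    }
}
```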
Choosing the Right Primitive
| Scenario | Recommended Primitive |
|---|---|
| Protecting a shared resource | lock / Monitor |
| Limiting concurrent access | SemaphoreSlim |
| Performing simple atomic operations | Interlocked |
| Many readers, few writers | ReaderWriterLockSlim |
| Coordinating thread signaling | Monitor |
Final Thoughts
Synchronization is one of the most critical—and error-prone—parts of concurrent programming. Always aim to minimize shared mutable state, and choose the simplest synchronization primitive that gets the job done.