Multithreading
Multithreading is a complicated topic, but it is essential to have an in-depth understanding to be a successful Java developer.
A Thread is a fundamental part of the JVM and represents a single 'thread' of execution. A thread executes methods, and maintains a record of the execution stack (which methods are currently executing) as it progresses.
Each thread has its own dedicated memory called Stack Memory. This memory records the execution stack and the primitive variables and object pointers currently active in each method within the stack. The stack memory is limited per thread, and this limit can be set using a JVM parameter (-Xss or -XX:ThreadStackSize).
Each thread of execution is represented by an instance of Thread. Running code in parallel can be performed in a number of ways, but ultimately reduces to something similar to the following:

```java
Runnable codeToRun = ...
Thread thread = new Thread(codeToRun);
thread.start();
```

To prevent two threads from executing the same piece of code at the same time, we use locking or synchronization.
For simple situations we can use the synchronized keyword on a method or within a method.
```java
private final Object monitor = new Object();
private int counter = 0;

// Only a single thread can execute within a synchronized block (for a given monitor)
public void increment() {
    synchronized (monitor) {
        counter++;
    }
}
```

We can suspend a thread within a synchronized block, if it needs to wait for another thread to perform an operation before it can continue, by using the wait/notify mechanism.
```java
private final Object monitor = new Object();

// Wait ('conditionMet' stands in for whatever condition the thread is actually waiting on)
synchronized (monitor) {
    while (!conditionMet) {
        // Suspend this thread (until notify is called by another thread)
        monitor.wait();
        // When notify is called we must still check whether we need to keep waiting (hence the loop)
    }
}

// Notify
synchronized (monitor) {
    // Perform an operation then notify a single waiting thread
    monitor.notify();
    // Alternatively, perform an operation then notify all waiting threads
    monitor.notifyAll();
}
```

Unfortunately, this style of synchronization is quite basic and restrictive, and can result in poor performance.
Instead we can use Lock objects, which provide much greater flexibility.
```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

private final Lock lock = new ReentrantLock();
private int counter = 0;

public void increment() {
    lock.lock();
    try {
        counter++;
    } finally {
        lock.unlock();
    }
}
```

Although this is much more verbose, it is very flexible in use, as the lock does not need to be locked and unlocked in the same method.
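For example, here is a minimal sketch of acquiring the lock in one method and releasing it in another, something a synchronized block cannot do (the beginTransaction/endTransaction names are purely illustrative):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

private final Lock lock = new ReentrantLock();

// Hypothetical example: the lock is acquired in one method ...
public void beginTransaction() {
    lock.lock();
}

// ... and released in a completely different method
public void endTransaction() {
    lock.unlock();
}
```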
The greatest flexibility is afforded by the ReentrantReadWriteLock. It provides two locks: a read lock that allows simultaneous access for any number of threads, and an associated write lock that, by contrast, only allows access to a single thread at a time (like the synchronized block), while also preventing any other thread from acquiring the read lock.
```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
private int counter = 0;

// Any number of threads can access this simultaneously (unless the write lock is held)
public int get() {
    lock.readLock().lock();
    try {
        return counter;
    } finally {
        lock.readLock().unlock();
    }
}

// Only a single thread can access this method (and it blocks all read and write lock access until complete)
public void increment() {
    lock.writeLock().lock();
    try {
        counter++;
    } finally {
        lock.writeLock().unlock();
    }
}
```

The equivalent of wait/notify for locks is called a Condition. You can create more than one Condition for each lock, and they provide the same operations.
| Operation | Code |
|---|---|
| Wait | condition.await() |
| Notify | condition.signal() |
| Notify All | condition.signalAll() |
See the source code for LinkedBlockingQueue for the classic example of using multiple conditions with a single lock.
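As a simplified sketch of the same idea (not the actual LinkedBlockingQueue implementation), a bounded buffer might use two conditions on one lock, one to wait for free space and one to wait for available items:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Queue<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();       // wait until space is available
            }
            items.add(item);
            notEmpty.signal();         // wake one thread waiting to take
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();      // wait until an item is available
            }
            T item = items.remove();
            notFull.signal();          // wake one thread waiting to put
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```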
In locking, fairness determines whether threads are guaranteed to queue up and acquire a lock in the order they arrived. By default, locks are not fair as this is more performant. A non-fair lock allows threads to attempt to acquire the lock as they arrive, which means that a thread could steal the lock if it arrives just as the lock is released, regardless of how many other threads are waiting. Fair locking disables this optimisation, ensuring threads only acquire the lock in the order in which they arrived chronologically.
All synchronized blocks use non-fair locking. Locks are also non-fair when created with the default constructor; however, an alternative constructor provides the option to enable fairness.
Reentrancy is simply the concept of allowing a lock to be acquired more than once by a thread that already holds it. A counter records how many times the thread has acquired the same lock. Each call to release the lock decrements the counter, but the lock is only actually released when the counter reaches zero. All Locks and the synchronized block are reentrant.
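A brief sketch of both ideas (the outer/inner method names are purely illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Fairness: passing true to the constructor makes threads acquire the lock in arrival order
private final ReentrantLock fairLock = new ReentrantLock(true);

// Reentrancy: a thread can re-acquire a lock it already holds
private final ReentrantLock lock = new ReentrantLock();

public void outer() {
    lock.lock();          // hold count becomes 1
    try {
        inner();          // calling another method that takes the same lock is fine
    } finally {
        lock.unlock();    // hold count back to 0 - the lock is actually released here
    }
}

public void inner() {
    lock.lock();          // same thread re-acquires: hold count becomes 2
    try {
        // lock.getHoldCount() == 2 at this point
    } finally {
        lock.unlock();    // hold count back to 1 - the lock is still held by this thread
    }
}
```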
The ExecutorService is a useful way to create a pool of one or more threads to execute code in parallel.
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Create a pool of 10 threads
ExecutorService pool = Executors.newFixedThreadPool(10);

// Submit tasks to the pool to run in parallel
// In this case, if 10 tasks are already running, additional tasks will be queued for execution
pool.submit(...);

// If we want to finish using the pool, we can shut it down so it won't accept new tasks ...
pool.shutdown();

// ... and wait for all existing tasks to finish
pool.awaitTermination(30, TimeUnit.SECONDS);
```

The Executors utility class provides a number of static factory methods for creating executor services, including:
| Description | Code |
|---|---|
| A fixed-size thread pool | Executors.newFixedThreadPool(int threads) |
| A thread pool with only one thread | Executors.newSingleThreadExecutor() |
| An unbounded thread pool that grows as needed and reuses idle threads | Executors.newCachedThreadPool() |
| A thread pool that can schedule tasks to run after a delay or periodically | Executors.newScheduledThreadPool(int corePoolSize) |
| An executor that creates a new virtual thread for each task (Java 21+) | Executors.newVirtualThreadPerTaskExecutor() |
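For instance, a scheduled pool can run tasks after a delay or on a repeating schedule, and the virtual-thread executor (Java 21+) starts a lightweight virtual thread per task. A rough sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

// Run once after a 5 second delay
scheduler.schedule(() -> System.out.println("delayed task"), 5, TimeUnit.SECONDS);

// Run every 10 seconds, starting immediately
scheduler.scheduleAtFixedRate(() -> System.out.println("repeating task"), 0, 10, TimeUnit.SECONDS);

// Java 21+: each submitted task gets its own virtual thread
// (ExecutorService is AutoCloseable, so try-with-resources waits for tasks and shuts it down)
try (ExecutorService perTask = Executors.newVirtualThreadPerTaskExecutor()) {
    perTask.submit(() -> System.out.println("running on a virtual thread"));
}
```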
Use of locking when threads interact with shared data is expensive. There is, however, a lock-free mechanism for updating shared data that provides thread-safe atomic operations.
The java.util.concurrent.atomic package contains a number of classes that provide lock-free atomic variables, including:
| Description | Code |
|---|---|
| A primitive int | AtomicInteger |
| A primitive long | AtomicLong |
| A primitive boolean | AtomicBoolean |
| An Object reference | AtomicReference |
| A primitive long array | AtomicLongArray |
| An Object reference array | AtomicReferenceArray |
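As a minimal sketch, an AtomicInteger can replace the locked counter from the earlier examples with lock-free updates (the replaceIfEquals wrapper name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

private final AtomicInteger counter = new AtomicInteger(0);

// Atomically increment and return the new value - no lock required
public int increment() {
    return counter.incrementAndGet();
}

// Atomically replace the value only if it currently equals the expected value
public boolean replaceIfEquals(int expected, int newValue) {
    return counter.compareAndSet(expected, newValue);
}
```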
When multiple threads are executing in parallel, it is often necessary to coordinate between them.
| Code | Description |
|---|---|
| CountDownLatch | A latch initialised with a counter on which threads wait and which releases all waiting threads when the counter reaches zero |
| CyclicBarrier | Allows threads to be suspended waiting on the barrier until a specific number of threads have arrived, at which point all are released |
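A minimal sketch of a CountDownLatch used to wait for a fixed number of worker tasks to finish (the doWork method is hypothetical):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public void runWorkers() throws InterruptedException {
    // Wait for three worker tasks to finish before continuing
    CountDownLatch latch = new CountDownLatch(3);
    ExecutorService pool = Executors.newFixedThreadPool(3);

    for (int i = 0; i < 3; i++) {
        pool.submit(() -> {
            try {
                doWork();            // hypothetical unit of work
            } finally {
                latch.countDown();   // signal that this worker has finished
            }
        });
    }

    latch.await();                   // blocks until the count reaches zero
    pool.shutdown();
}
```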