Thread Safety in Multithreading
The Problem

When multiple threads access the same object's data simultaneously, thread safety issues arise.

Solutions

The most common solution to thread safety problems is using locks, typically an @synchronized block. @synchronized automatically creates a lock based on the given object and waits for the block to finish executing; the lock is released at the end of the block. The synchronization target is usually self. This generally works fine because it ensures each object instance can run its synchronized methods without interference. However, it reduces efficiency, because all synchronized blocks sharing the same lock must execute in order. If locks are acquired frequently on self, the program may have to wait for unrelated synchronized code to complete before it can proceed.

Another option is NSLock, or NSRecursiveLock, which allows a thread to acquire the same lock multiple times without deadlocking.

A better approach is GCD, which provides a simpler and more efficient way to synchronize code. Using @synchronized(self) on every property is inefficient, because every synchronized block must wait for all other synchronized blocks to complete, when in practice we only need each property to synchronize independently. Additionally, this kind of thread safety is limited: for example, a thread might read a property twice in succession while another thread writes a new value in between.

Using a serial synchronization queue, dispatching both read and write operations to the same queue, guarantees data consistency. In getters and setters, the idea is to place both read and write operations in a serial queue so that all accesses to the property are synchronized. The locking is handled entirely within GCD, which has already been optimized at a low level.
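The lock-based approaches mentioned above can be sketched as follows. This is a minimal illustration, not code from the original; the class name EOCPerson and the method names are made up for the example.

```objc
#import <Foundation/Foundation.h>

// Illustrative class showing @synchronized and NSRecursiveLock.
@interface EOCPerson : NSObject
- (void)synchronizedMethod;
- (void)recursiveLockMethod;
@end

@implementation EOCPerson {
    NSRecursiveLock *_lock;
}

- (instancetype)init {
    if ((self = [super init])) {
        _lock = [[NSRecursiveLock alloc] init];
    }
    return self;
}

// @synchronized creates (and caches) a lock keyed on the given
// object, holds it for the duration of the block, and releases
// it when the block exits, even if an exception is thrown.
- (void)synchronizedMethod {
    @synchronized(self) {
        // Safe to touch shared state here.
    }
}

// NSRecursiveLock lets the same thread acquire the lock again
// without deadlocking, e.g. when synchronized methods call
// each other on the same instance.
- (void)recursiveLockMethod {
    [_lock lock];
    // Safe to touch shared state here, even if this method
    // is re-entered on the same thread.
    [_lock unlock];
}
@end
```

Note that every method synchronizing on self shares one lock, which is exactly the contention problem described above.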
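The serial-queue accessor pattern described above can be sketched like this. The class and property names are illustrative, not from the original:

```objc
#import <Foundation/Foundation.h>

// Illustrative class: property access synchronized via a serial queue.
@interface EOCPerson : NSObject
@property (nonatomic, copy) NSString *someString;
@end

@implementation EOCPerson {
    dispatch_queue_t _syncQueue;
    NSString *_someString;
}

- (instancetype)init {
    if ((self = [super init])) {
        _syncQueue = dispatch_queue_create("com.example.syncQueue",
                                           DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (NSString *)someString {
    __block NSString *localString;
    // Reads run synchronously on the serial queue, so each read
    // observes a consistent value.
    dispatch_sync(_syncQueue, ^{
        localString = _someString;
    });
    return localString;
}

- (void)setSomeString:(NSString *)someString {
    // Writes go through the same serial queue, so reads and
    // writes are strictly ordered with respect to each other.
    dispatch_sync(_syncQueue, ^{
        _someString = [someString copy];
    });
}
@end
```

Because the queue is serial, only one access runs at a time, with no explicit lock in the accessor code.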
Since setters don't necessarily need to be synchronous, and the block that sets the instance variable doesn't need to return a value, the setter can dispatch asynchronously instead. This changes the setter from synchronous to asynchronous execution, which can speed up the setter while reads and writes still execute in order. However, this might actually hurt performance in practice, because async dispatch requires copying the block. If the time spent copying the block is significantly longer than the time spent executing it, this approach will be slower than the synchronous version. It's worth considering only if the blocks being submitted to the queue perform heavy work.

Multiple getters can safely run concurrently, but getters and setters cannot run concurrently with each other. We can exploit this by switching to a concurrent queue. On its own, however, a concurrent queue doesn't achieve proper synchronization: all reads and writes execute on the same queue, but because the queue is concurrent, they can run interleaved at any time. This can be fixed with GCD barriers: dispatch_barrier_async and dispatch_barrier_sync. A barrier block in a queue must run exclusively; it cannot run in parallel with other blocks. This only makes sense for concurrent queues, since serial queue blocks always run one at a time anyway. When a concurrent queue encounters a barrier block, it waits for all currently running concurrent blocks to finish, then executes the barrier block alone. After the barrier block completes, the queue resumes normal concurrent execution.

Using a barrier block in the setter allows reads to remain parallel while writes execute exclusively: on a concurrent queue, reads are implemented with regular blocks and writes with barrier blocks. Reads can run in parallel, but writes must run exclusively. This implementation is faster than a serial queue. Additionally, the setter could also use a synchronous barrier block (dispatch_barrier_sync), which might be even more efficient in some cases.
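The reader/writer arrangement just described can be sketched as follows, again with illustrative names. Only the queue type and the setter change relative to the serial-queue version:

```objc
#import <Foundation/Foundation.h>

// Illustrative class: parallel reads, exclusive barrier writes.
@interface EOCPerson : NSObject
@property (nonatomic, copy) NSString *someString;
@end

@implementation EOCPerson {
    dispatch_queue_t _syncQueue;
    NSString *_someString;
}

- (instancetype)init {
    if ((self = [super init])) {
        // A concurrent queue lets multiple reads run in parallel.
        _syncQueue = dispatch_queue_create("com.example.syncQueue",
                                           DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

- (NSString *)someString {
    __block NSString *localString;
    // Plain dispatch_sync: reads may overlap with one another.
    dispatch_sync(_syncQueue, ^{
        localString = _someString;
    });
    return localString;
}

- (void)setSomeString:(NSString *)someString {
    // The barrier makes the write exclusive: it waits for all
    // in-flight reads to finish and holds off new ones until
    // the write completes.
    dispatch_barrier_async(_syncQueue, ^{
        _someString = [someString copy];
    });
}
@end
```

Swapping dispatch_barrier_async for dispatch_barrier_sync in the setter gives the synchronous-barrier variant mentioned above, which avoids the block copy at the cost of blocking the caller.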