Threads 101


Threads, in the computer science sense, provide a way for applications to split themselves into several tasks, each running logically (and, depending on the underlying hardware, potentially even physically) in parallel. While they may seem similar at first glance, a thread is quite different from a process: a process generally maintains a lot of state information (i.e., in its process control block), has its own address space, and typically interacts with other processes only via inter-process communication (IPC) constructs (e.g., sockets). Threads, on the other hand, share the state information of a single process (although each thread usually has its own mini control block) and directly share its memory and resources. Context-switching (i.e., the act of swapping an executing entity with a waiting entity) also typically occurs much faster between threads than between processes. As such, threads are often referred to as lightweight processes and are used to increase the performance and responsiveness of an application.

Why Would I Want to Use Threads?

There are a variety of reasons to employ threads. Since threads can execute tasks in parallel, programs can finish more work per unit of execution time, thus increasing the overall performance of the program. For example, imagine that you have two sets of numbers, x and y, that need to be sorted. One solution would be to sequentially sort x, then sort y, and then complete. This solution is fine if x and y are small sets, but it's grossly inefficient if each sort operation takes 30 minutes. Instead, we could create two threads that sort x and y simultaneously and complete when both sort operations complete. By doing so, our program might complete in half of the original time. It's worth noting that spawning n threads does not result in 1/n of the original running time, as there is some additional overhead in thread management, and at a certain point, scalability issues arise.
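
To make this concrete, here's a minimal sketch of the idea (the array contents and sizes are illustrative, not taken from the original example): two threads sort two independent arrays concurrently, and the main thread joins on both before continuing. The start and join methods used here are covered in the "Thread Support in Java" section below.

import java.util.Arrays;
import java.util.Random;

public class ParallelSort {
      public static void main(String[] args) throws InterruptedException {
            // Two independent data sets to sort (sizes chosen arbitrarily)
            Random rng = new Random();
            final int[] x = new int[1000000];
            final int[] y = new int[1000000];
            for(int i = 0; i < x.length; ++i) x[i] = rng.nextInt();
            for(int i = 0; i < y.length; ++i) y[i] = rng.nextInt();

            // One thread per data set
            Thread sortX = new Thread("sort-x") {
                  public void run() { Arrays.sort(x); }
            };
            Thread sortY = new Thread("sort-y") {
                  public void run() { Arrays.sort(y); }
            };

            sortX.start();
            sortY.start();

            // Rendezvous: wait for both sorts to finish
            sortX.join();
            sortY.join();

            System.out.println("Both sorts complete.");
      }
}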

In general, there are four primary benefits of using threads:

  • Responsiveness—Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. For example, a multithreaded Web browser could allow user interaction in one thread while a large table is being loaded in another thread.
  • Resource sharing—By default, threads share the same memory and the resources of the process to which they belong. The benefit of resource-sharing is that it allows an application to have several different threads of execution within the same address space.
  • Economy—Allocating memory and resources for process creation is costly. Alternatively, because threads share resources of their parent process, it's far more economical to create and context-switch threads than processes.
  • Utilization of multiprocessor architectures—The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may be running in parallel on a different processor. A single-threaded process can run on only one CPU, regardless of how many physical processors are available. Multithreading on a multi-CPU machine increases concurrency and decreases runtime.

Depending on the type of application you're creating, the above benefits will carry different weights. For example, if you're creating a GUI, responsiveness is absolutely critical to keeping users happy; everyone gets disgruntled when the GUI freezes up for long periods of time. However, if you're writing programs for enterprise-level systems, it's probably important to make effective use of the system's multiprocessor architecture and harness all those CPU cycles.

What About Single-Processor Systems?

A common misconception is that, because a system has only one processor, threading won't help since the system can do only one thing at a time anyway. While it's certainly true that a single processor can execute only one instruction at a time, threading applications on single-processor systems can still have a dramatic impact. The majority of applications spend most of their time waiting on I/O operations (e.g., user input, disk, network, etc.), so while one thread is blocked on an I/O operation, another thread can be performing useful work (or waiting on I/O from another source).
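
Here's a rough sketch of why this helps even on one CPU (the two-second sleep simply simulates a blocking I/O call and is purely illustrative): one thread spends its time "blocked on I/O" while the main thread continues doing useful computation.

public class IoOverlap {
      public static void main(String[] args) throws InterruptedException {
            // This thread spends most of its time blocked, as if waiting on I/O
            Thread reader = new Thread("reader") {
                  public void run() {
                        try {
                              Thread.sleep(2000); // stand-in for a blocking read
                              System.out.println("I/O finished");
                        } catch(InterruptedException ie) { }
                  }
            };
            reader.start();

            // Meanwhile, the main thread keeps the CPU busy with useful work
            long sum = 0;
            for(long i = 0; i < 100000000L; ++i) {
                  sum += i;
            }
            System.out.println("Computation finished: " + sum);

            reader.join();
      }
}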

That Seems Easy. What's the Difficulty?

If you're new to threads, you're probably thinking, "This seems easy enough. I'm going to start multithreading my programs." While there are certainly a lot of opportunities to use threads, there are also a lot of (often hidden) challenges that present themselves when working in a multithreaded environment.

First and foremost, testing and debugging multithreaded programs can be extremely difficult. With multiple threads of execution (instead of just one), the overall sequence of events isn't guaranteed to match from one execution to the next (and usually doesn't). In a nutshell, this means that your program, all things being equal, may appear to work in some cases and not in others because of the manner in which the threads are context-switched. This effect is known as a "race condition" and is described further later in this article.

Another common problem in multithreaded programs is a condition known as "deadlock." A deadlock occurs when a cycle of wait dependencies forms between threads. For example, imagine that thread x is waiting for thread y to finish its current operation while thread y is waiting for thread x to finish its current operation. See the problem? Neither thread x nor thread y will ever advance its execution, since both are caught in a cycle of waits. Deadlocks can be very difficult to detect and can be hard to reproduce, since they're often dependent on a specific sequence of events occurring at just the right times (i.e., race conditions).
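
Here's a minimal sketch of such a cycle (the lock names are illustrative, and the example uses Java's synchronized keyword, covered later in this article). Each thread acquires one lock and then waits forever for the lock the other thread holds, so the program typically hangs.

public class DeadlockDemo {
      private static final Object LOCK_A = new Object();
      private static final Object LOCK_B = new Object();

      public static void main(String[] args) {
            // Thread x takes LOCK_A, then waits for LOCK_B
            new Thread("x") {
                  public void run() {
                        synchronized(LOCK_A) {
                              pause(100);
                              synchronized(LOCK_B) {
                                    System.out.println("x finished");
                              }
                        }
                  }
            }.start();

            // Thread y takes LOCK_B, then waits for LOCK_A -- a cycle of waits
            new Thread("y") {
                  public void run() {
                        synchronized(LOCK_B) {
                              pause(100);
                              synchronized(LOCK_A) {
                                    System.out.println("y finished");
                              }
                        }
                  }
            }.start();
      }

      private static void pause(long millis) {
            try { Thread.sleep(millis); } catch(InterruptedException ie) { }
      }
}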

Because threads share the same address space, data incoherency is another major problem. That is, if you don't use the right techniques (which I'll discuss later) to access data shared between two or more threads, the data integrity of the program can be (and, believe me, will be) compromised as a result of the threads "stepping on each other's toes."

The use of a good program debugger is often needed to resolve these types of problems. For example, the Eclipse-based Java debugger has excellent support for debugging multithreaded programs.

Thread Support in Java

Now that we've covered some of the basics of threads and their accompanying pros and cons, let's take a look at threads in the Java world. Because Java was designed with threads in mind, it has a very rich thread API. Every thread extends the java.lang.Thread class. The Thread class supports a variety of methods, but these are some of the most commonly used:

  • Thread(String threadName)—This version of the constructor instantiates a new thread with the specified name. Naming threads can be useful when debugging as it provides a mechanism by which the threads can easily be disambiguated. It's also important to note that all data members of an instantiated thread belong to the thread itself; this effectively provides a thread local storage (TLS) mechanism, which is somewhat tedious to deal with in languages such as C.
  • public void run()—This method will be internally invoked by the Java Virtual Machine (JVM) when the thread's start method is invoked. In essence, this is the thread equivalent of the main method for the main program entry point.
  • public void start()—Invoking this method causes the thread to start execution. Prior to invoking this method, the thread is in an inactive state.
  • public void join()—This method can be used to force the currently executing thread to wait for the thread on which the join method is invoked to finish its execution. This is effective for creating a rendezvous point of sorts.
  • public static void sleep(long millis)—This static method can be invoked to force the currently executing thread to sleep for the specified time.

Putting the pieces together, here's a simple program that defines a thread class that simply counts from 0 to 99, printing each number to standard out. The program spawns 10 threads to perform their count operations in parallel and then waits for the threads to finish before terminating.

import java.util.*;

public class MyThread extends Thread {
      public MyThread() {
            super("MyThread"); // give the thread a useful name
      }

      // The run() method contains the thread's runtime logic
      public void run() {
            for(int i = 0; i < 100; ++i) {
                  System.out.println("Counter: " + i);
            }
      }

      public static void main(String[] args) {
            List<Thread> spawnedThreads = new ArrayList<Thread>();

            for(int i = 0; i < 10; ++i) {
                  // create an instance of our thread
                  Thread t = new MyThread();
                  // maintain a list of the threads
                  spawnedThreads.add(t);
                  // instruct the thread to start
                  t.start();
            }

            for(Thread t : spawnedThreads) {
                  try {
                        // wait for the thread to end
                        t.join();
                  } catch(InterruptedException ie) { }
            }

            System.out.println("Program terminating.");
      }
}


That's all there is to it: Extend the Thread class, complete the implementation of the run method, and invoke the start method. Your thread's off and running! One noteworthy point here is how threading an application affects exception handling. In the above example, the main thread (i.e., the thread that all programs implicitly contain) spawns several worker threads. Any exceptions generated within the worker threads' call stacks don't percolate back to the main thread. The run method of each thread is effectively the bottom-most frame of its call stack, so it's the last point to which an exception can propagate within that thread. Since the example above is trivial, there's not much room for error.

Let's take a look at some other thread terminology and constructs to help us in more complex cases.

Multithreaded Programming Terms and Techniques

When working in a multithreaded environment, it's important to have a fundamental understanding of some of the terminology. This section introduces some of the most commonly encountered multithreading terms.

Critical Section

A critical section is a block of code that accesses some resource (e.g., a data structure) shared among several threads of execution, but with the limitation that only a single thread should be able to execute the block at any given time (i.e., it should be executed mutually exclusively). The shared piece is the key here. Imagine an array-backed list structure. When an item is inserted into the array, some positional index is modified to prepare for the next insert. If two or more threads are logically within the insert operation at the same time, there's the potential for the positional index to become corrupted. In general, this is where it's critical to pay attention to the thread safety of any APIs your code uses. That is, if they're not thread-safe, you'll have to call them within a critical section to ensure the integrity of your application.
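
As a small sketch of the idea (the class and field names are illustrative, and it uses Java's synchronized keyword, which is covered later in this article), here's an array-backed list whose insert operation is a critical section because of its shared positional index:

public class SimpleList {
      private final Object[] items = new Object[100];
      private int nextIndex = 0; // shared positional index

      // Without synchronization, two threads could read the same nextIndex,
      // overwrite each other's item, or skip a slot entirely.
      public synchronized void insert(Object item) {
            items[nextIndex] = item;
            nextIndex = nextIndex + 1;
      }
}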

A mistake programmers new to multithreaded programming often make is being too liberal with locking mechanisms. For example, take the following simple method that returns an array of floats containing the areas of the specified triangles (let's also assume that once a triangle is created, it's not modified):

public static float[] getArea(Triangle[] triangles) {
      float[] ret = new float[triangles.length];

      for(int i = 0; i < ret.length; ++i)
            // Compute the area as ½ the base * the height
            ret[i] = 0.5f * triangles[i].getBase() * triangles[i].getHeight();

      return ret;
}


Is this method thread-safe without the use of any additional locking mechanisms? The answer is yes. The method's not accessing any shared resources; rather, it deals only with arguments and local variables. Each thread has its own private stack on which all of the arguments and local variables exist. This means that should any two threads be within the scope of the getArea method, they'll both be working with their own copies of the variables as they're context-switched in and out. However, if the method accessed a static global variable, you'd need to include additional thread-safety mechanisms, which will be discussed later in this section.

Race Condition

A race condition is the result of a flaw in the design of a concurrent program whereby the output of the program depends on the order in which the program's events occur. A classic example illustrating a race condition is a program that contains two threads whose jobs are simply to increment some global integer counter (e.g., via i = i + 1). If the counter is initially 0, you would expect the final output to be 2. However, if the program runs without proper thread synchronization, it's possible that the first thread read the current value of i (0), then the second thread was context-switched in and executed i = i + 1 in full (so i is now 1), and then the first thread resumed and completed its increment using the stale value it had already read. In this case, the final value is (incorrectly) 1.
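
To see the lost update in practice, here's a small sketch (the class name and iteration counts are illustrative): two threads each increment a shared counter 100,000 times without synchronization, and the final total usually comes out below the expected 200,000.

public class RaceDemo {
      // Shared, unsynchronized counter
      private static int counter = 0;

      private static class IncrementThread extends Thread {
            public void run() {
                  for(int i = 0; i < 100000; ++i) {
                        counter = counter + 1; // read-modify-write: not atomic
                  }
            }
      }

      public static void main(String[] args) throws InterruptedException {
            Thread t1 = new IncrementThread();
            Thread t2 = new IncrementThread();
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // Expected 200000, but lost updates typically make it smaller
            System.out.println("Final counter: " + counter);
      }
}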

Semaphore

Proposed by Edsger Dijkstra in the 1960s, the semaphore is now considered a primitive construct in multithreaded programming. It's essentially a counter supporting two basic operations, P (also called wait) and V (also called signal). The P operation decrements the counter, and if the result is less than 0, the current thread blocks on the semaphore. The V operation increments the counter, and if the result is less than or equal to 0 (i.e., there are still threads waiting to acquire the semaphore), one of the blocked threads is woken up and allowed to progress. Here's an example implementation of a semaphore in Java:

public class SimpleSemaphore {
      private int counter;

      public SimpleSemaphore(int counter) {
            this.counter = counter;
      }

      // P (wait): decrement the counter; block if it drops below 0
      public synchronized void P() throws InterruptedException {
            counter--;
            if(counter < 0)
                  wait();
      }

      // V (signal): increment the counter; wake a blocked thread if any are waiting
      public synchronized void V() {
            counter++;
            if(counter <= 0)
                  notify();
      }
}


A semaphore can be used for several purposes. Its most frequent use is to protect access to critical sections. A semaphore whose initial value is 1 (also called a binary semaphore or a mutex) can be used to ensure a block of code is executed mutually exclusively from other threads. For example, let's revisit the example above in the race condition definition. If a shared binary semaphore S was used such that S.P (i.e., obtain the lock) was invoked before the increment and S.V (i.e., release the lock) was invoked after the increment, then the second thread would be blocked until the first thread released the lock. Thus, the final value would be 2, and the race condition would have been completely eliminated.

Another common use is a semaphore S whose initial value is 0, also called a "blocking semaphore." That is, the first call to S.P blocks. When another thread calls S.V, the blocked thread can progress. This is useful for controlling the order in which events occur in a threaded environment.
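
As a sketch of that pattern (using the SimpleSemaphore class defined above; the thread names are illustrative), a semaphore initialized to 0 forces a "consumer" thread to wait until a "producer" thread signals that its work is done:

public class OrderingDemo {
      public static void main(String[] args) throws InterruptedException {
            // Blocking semaphore: the first P() blocks until someone calls V()
            final SimpleSemaphore ready = new SimpleSemaphore(0);

            Thread producer = new Thread("producer") {
                  public void run() {
                        System.out.println("Producing the data...");
                        ready.V(); // signal: the data is now available
                  }
            };

            Thread consumer = new Thread("consumer") {
                  public void run() {
                        try {
                              ready.P(); // wait until the producer signals
                              System.out.println("Consuming the data...");
                        } catch(InterruptedException ie) { }
                  }
            };

            consumer.start();
            producer.start();
            producer.join();
            consumer.join();
      }
}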

While semaphores are much-needed and excellent problem solvers in multithreaded programming, they can cause problems if used incorrectly. For example, should a thread attempt to acquire a lock on a binary semaphore that it already holds, a deadlock will occur. So pay extra attention when you're acquiring and releasing semaphore locks to avoid these difficult-to-detect problems. A good word of advice is to always signal a semaphore in a finally { } block; that way, even in the event of a RuntimeException (i.e., an unchecked exception that doesn't need to be explicitly caught), the semaphore will still be signaled. For instance, the logic might look something like this:

sem.P();
try {
       // Critical section logic here...
}
finally {
       // Guarantees that the lock will be released
       sem.V();
}


Prior to Java 1.5, if you wanted to use a semaphore, you had to construct your own (using code similar to the example provided above). However, with the 1.5 release of Java and the addition of the java.util.concurrent package, a very feature-rich Semaphore class is provided out of the box.
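
For instance, here's a minimal sketch (the class and field names are illustrative) of the built-in java.util.concurrent.Semaphore used as a binary semaphore to protect a critical section, following the same P/V pattern shown above:

import java.util.concurrent.Semaphore;

public class SafeCounter {
      // Binary semaphore: one permit, so only one thread at a time
      private final Semaphore lock = new Semaphore(1);
      private int count = 0;

      public void increment() throws InterruptedException {
            lock.acquire();          // the P operation
            try {
                  count = count + 1; // critical section
            } finally {
                  lock.release();    // the V operation, guaranteed by finally
            }
      }
}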

Thread Pool

A thread pool is a group of threads to which tasks can be submitted. Since creating and destroying threads carries additional overhead (albeit nowhere near that of creating and destroying processes), the idea is to create a (fixed) pool of threads that run continuously, waiting to perform work. With the release of Java 1.5, there's now java.util.concurrent.Executors, a class that provides several static factory methods for creating all kinds of thread pools.
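
Here's a brief sketch of the idea (the pool size and tasks are illustrative): create a fixed-size pool, submit tasks to it, and shut it down when no more work is coming. The four pooled threads are reused across all ten tasks.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
      public static void main(String[] args) {
            // A fixed pool of 4 worker threads that are reused across tasks
            ExecutorService pool = Executors.newFixedThreadPool(4);

            for(int i = 0; i < 10; ++i) {
                  final int taskId = i;
                  pool.execute(new Runnable() {
                        public void run() {
                              System.out.println("Task " + taskId + " running on "
                                    + Thread.currentThread().getName());
                        }
                  });
            }

            // Accept no new tasks; previously submitted tasks still complete
            pool.shutdown();
      }
}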

Java "Synchronized" Keyword

The "synchronized" keyword in Java serves as a primitive (in Java) lock acquisition and release mechanism. Every object in Java has a hidden lock associated with it, accessible only through the synchronized keyword. For example, an object's method could be written as follows:

public synchronized void doSomething() { ... }

Using the synchronized keyword on the method's signature ensures that if more than one thread tries to execute the doSomething method at the same time, subsequent threads will be blocked until the first thread has completed, at which time only one other thread will be allowed to enter the method. This cycle repeats itself until all threads have (serially) executed the doSomething method. This effectively makes the entire body of the doSomething method a critical section. However, this use of synchronized often has side effects that not everyone's aware of. Let's say the object supports another method whose logic is completely unrelated to the doSomething method's logic:

public synchronized void doSomethingElse() { ... }

Using the synchronized keyword in this fashion will not only prevent two threads from executing the same method at the same time, but also prevent one thread from executing the doSomething method while another executes the doSomethingElse method! While functionally everything will work as expected, this solution has the unwanted side effect of slowing down the entire system, because the same lock (the object's own lock) is used for both methods. So how can we fix this (without using another locking mechanism, such as a semaphore)? We can use another form of the synchronized keyword that allows us to obtain and release locks on user-specified objects:

// Create two static final dummy lock objects.
// Note that static fields refer to the same object across all
// instances of the class.
private static final Object LOCK_1 = new Object(), LOCK_2 = new Object();

public void doSomething() {
      // Obtain the lock on the first static object
      synchronized(LOCK_1) { ... }
}

public void doSomethingElse() {
      // Obtain the lock on the second static object
      synchronized(LOCK_2) { ... }
}


In this case, we've explicitly defined two different lock objects. This way, we now get the effect we want; no two threads can execute a particular method in parallel, but two threads can execute doSomething and doSomethingElse in parallel.

The curious reader might wonder what would happen if the doSomething method were recursive (or called any other method attempting to acquire the lock on LOCK_1). Would a deadlock occur? The answer is, perhaps unexpectedly, no! That's because Java's synchronized locking mechanism is reentrant. That is, if a thread T already holds a lock L, any subsequent request T makes to acquire L will succeed immediately (this is a major difference between synchronized and the semaphore shown earlier). While this may or may not be what you want, it's definitely something you need to be aware of.
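
A quick sketch of that behavior (the class and method names are illustrative): a synchronized method calls itself recursively, re-acquiring the lock it already holds, and the program completes without deadlocking.

public class ReentrantDemo {
      // countdown() is synchronized, yet it can call itself recursively:
      // the thread already owns this object's lock, so re-entry succeeds.
      public synchronized void countdown(int n) {
            if(n <= 0) {
                  return;
            }
            System.out.println("n = " + n);
            countdown(n - 1); // re-acquires the lock we already hold
      }

      public static void main(String[] args) {
            new ReentrantDemo().countdown(3);
      }
}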

Threading the Pieces Together

If this seems like a lot of information to digest, it's because it is! Threads in a programmer's tool belt are similar to a hammer in a carpenter's tool belt: used correctly, they can save you a lot of time; used incorrectly, they can cause you a wealth of problems. Should threads be used in all applications? Definitely not. We've seen that although they can increase performance and responsiveness, they make applications much more difficult to test and debug and can introduce an entirely new class of problems that even the best compilers can't help you with. However, in certain applications their use is absolutely necessary, and some general thread background knowledge is a huge asset to any development team. As is true with anything, knowing when to use threads and identifying the potential areas of conflict comes with experience; practice makes (almost) perfect! If I could leave you with one piece of advice when dealing with threads, it'd be to think thoroughly through the paths of execution, and then do it again and again.

Joe Cropper is a Software Engineer at IBM in Rochester, Minnesota. Joe works as part of the System i Software Final System Test team, and his areas of expertise include database, multi-tiered client-server technologies, and a variety of object-oriented programming languages. Joe can be reached via email.
