The number one criticism that Java has faced since its inception is performance. We are in an industry that's obsessed, often to a fault, with performance. So, with each release of the JDK, Sun has not only introduced new features but also significantly improved performance.
In JDK 1.4, Sun addressed two areas of performance, one obviously critical and one not so obviously critical. The obviously critical need was to improve I/O performance, which is the focus of this month's Java Journal. The less obviously critical need was to improve the performance of Reflection. Reflection has become an integral part of most Java frameworks as well as most tools such as IDEs, so the bottlenecks that plagued the early implementation of the Reflection API were very costly--though most developers were blissfully unaware of them. But the improvement to the Java Reflection API is a topic for another article. So on to NIO, which stands for New I/O. I have heard it pronounced two ways. The first way sounds like "neo." The other way, each letter is pronounced, like en-eye-oh. The former is what I have most often heard, and being a Matrix fan, it's what I prefer.
Setting the Stage
Why is I/O performance so important? Well, it's all about orders of magnitude. If we look at a typical computer system--whether it be from the past or present--we find a common ratio between CPU performance and I/O performance. Specifically, CPU performance is at least two orders of magnitude faster than I/O performance. Even a slight increase in I/O performance can have a drastic, positive effect on overall performance. Anybody who has used Java to do large amounts of I/O pre-JDK 1.4 knows that there was significant room for improvement. In my business, digital video, we are usually dealing with files in the multi-gigabyte range, and processing batches of files of that order of magnitude can be painfully slow. So why were the pre-JDK 1.4 implementations of I/O so slow? It all ties back to one of my favorite topics, tradeoffs. The classic tradeoff in computing is time vs. space. We can almost always do something in less time if we have more space, and vice versa. Another tradeoff, which is less understood, is performance vs. understandability.
Let's examine an extreme example. Say we need to find all the prime numbers between 2 and 1,000,000. We could write two programs to do this, one in assembly and one in Java. Assuming we have two equally talented programmers in their own respective fields doing the work, the assembly language program is going to be faster. Why? Because layers of abstraction are expensive. The assembly programmer is dealing with one layer of abstraction at most (even assembly isn't quite machine code). The Java programmer is dealing with many layers of abstraction--Java code mapping to Java byte code mapping to machine code, and this still doesn't take into account the several layers of abstraction that the Java programmer is probably taking advantage of within the Java language itself. What these layers of abstraction buy us is understandability.
OK, some will argue that if they are equally talented, the assembly language programmer will be able to read the assembly code as easily as the Java programmer can read the Java code. I agree. However, there are two interesting scenarios to think about. First, the assembly programmer can probably read and understand the Java program, but the reverse will not be true and will probably just lead to the mass consumption of Advil and comments like "this is the exact reason why I program in Java." Second, let's say our application is something slightly more complicated than a prime number locator, something like income tax preparation software, which if written in assembly might be as incomprehensible as the actual tax law that it is based on.
All this said, what we want to do is look for tradeoffs where we give a little on one side but gain more on the other.
When we are dealing with I/O, there are two things we want to do (or not do, as the case may be). First, we want to actually move data as few times as possible. This sounds rather intuitive but can actually be difficult to implement. Second, when we do move data, we want to move it in the most efficient manner possible. To accomplish efficient moves, we want to move data in pieces that are multiples of the operating system's page size. For NIO, this is exactly what Sun did. Without getting into all the gory details, I'll just say that Sun collapsed the number of abstraction layers between Java code and machine code.
Pre-JDK 1.4, the solution to I/O and other Java performance problems came in the form of the Java Native Interface (JNI), which allows you to call non-Java, platform-specific code from within a Java Virtual Machine. JNI is still a necessary part of the JDK 1.4, but great care should be exercised when using JNI. After all, JNI calls do tie Java applications to specific operating systems, which breaks the fundamental Java paradigm of Write Once Run Anywhere (WORA).
Java NIO API
NIO gives us many new technologies as well as new flavors of old favorites. This article focuses on buffers, but we will also dabble a little in channels, file locking, memory mapped files, sockets, selectors, and character sets. NIO was the end product of Java Specification Request (JSR) 51. In addition to the core I/O technologies listed above, JSR 51 also produced a powerful regular expression engine. However, since regular expressions aren't really an I/O topic, I will not go into details here.
Buffers
NIO gives us a very robust hierarchy of Buffer classes. Buffers are the basic building blocks of I/O. The basic operations on buffers are filling and draining. The NIO Buffer class hierarchy consists of a single abstract base class, fittingly enough called "Buffer." For each of the primitive Java types--byte, char, double, float, int, long, and short--there is a derived class, where the class name is of the form <Type>Buffer: ByteBuffer, CharBuffer, DoubleBuffer, FloatBuffer, IntBuffer, LongBuffer, and ShortBuffer. Each buffer maintains four attributes (mark, position, limit, and capacity) that must always satisfy the following relationship:
0 <= mark <= position <= limit <= capacity
Why must this always be true? Well, limit must be less than or equal to capacity because otherwise we would have broken the laws of time and space by putting something that is bigger than itself inside itself. Position must be less than or equal to the limit; that way, we are always dealing with valid elements. Next is the subtle one; mark must be less than or equal to position. The implementation of mark only allows it to be used for backing up in a buffer, as opposed to going forward. Since mark can only be set to the current position, the only way that this relationship can be violated is by explicitly setting position to be less than mark and then making a call to reset(), which sets position to the value of mark. In this case, reset() would throw a java.nio.InvalidMarkException.
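A short sketch makes the mark rules concrete (the buffer contents here are arbitrary):

```java
import java.nio.ByteBuffer;
import java.nio.InvalidMarkException;

public class MarkResetDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);     // capacity = 8
        buf.put((byte) 10).put((byte) 20);           // position = 2
        buf.mark();                                  // mark = 2
        buf.put((byte) 30);                          // position = 3
        buf.reset();                                 // position back to 2
        System.out.println(buf.position());          // prints 2

        buf.position(0);                             // moving below the mark discards it
        try {
            buf.reset();                             // no mark to return to
        } catch (InvalidMarkException e) {
            System.out.println("mark was discarded");
        }
    }
}
```

Both paths behave exactly as the inequality chain predicts: reset() is legal only while mark <= position holds.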
Creating buffers poses a paradox. We want to be able to have our new Buffer classes manage the data, but we also want to copy the data as few times as possible. The Buffer classes solve this in an innovative way. All of the Buffer classes are abstract classes, meaning you can't create them directly using the standard constructor mechanism, but you can instantiate them by invoking any one of several static factory methods. If we don't already have data, we can instantiate a new buffer by calling allocate(), which uses a single int argument to set the capacity of the buffer. Note that this limits the size of a buffer to 2,147,483,647 elements (the maximum value of an int), which sounds large, but most of the time when I'm using buffers, I'm using a ByteBuffer, and this puts the limit at 2GB of data, which sometimes is less than I need. As mentioned, sometimes we already have the data in an array and want to use this data in a buffer but don't want to take the hit for copying all the data into the buffer. No problem. The Buffer classes offer a solution for this situation: We can instantiate a new buffer by calling wrap() and supplying an array full of data as an argument. The Buffer class will then use this array as the storage for the buffer. Since the buffer truly is just wrapping the array, the underlying data can be manipulated through either the array interface or the buffer interface. This may not be the best academic solution, but it certainly makes things more efficient. There are also several methods for duplicating and slicing up buffers that are highly efficient and useful, but we will save these topics for a future article.
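Here is a minimal sketch of both creation styles (the sizes and contents are arbitrary):

```java
import java.nio.CharBuffer;

public class BufferCreation {
    public static void main(String[] args) {
        // allocate(): the buffer manages its own backing array
        CharBuffer fresh = CharBuffer.allocate(1024);   // capacity = 1024, position = 0

        // wrap(): reuse an existing array without copying it
        char[] data = "hello".toCharArray();
        CharBuffer wrapped = CharBuffer.wrap(data);     // capacity = limit = 5

        data[0] = 'j';                                  // changes made through the array...
        System.out.println(wrapped.get(0));             // ...show through the buffer: prints j
    }
}
```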
There is a hidden gem inside the NIO API for dealing with byte ordering. We touched on byte order briefly in "Java Journal: Fun with Parsing," where we needed to convert shorts and ints from little-endian notation to big-endian notation. Before NIO, when you needed to do endian conversion--either big to little or little to big--you had to roll your own like the little to big one below.
// EndianConverter.java -- a hand-rolled byte swapper, pre-NIO style
public class EndianConverter
{
    public static short LEtoBE(short le)
    {
        // Swap the two bytes of a 16-bit value
        return (short)(((le >> 8) & 0x00FF) | ((le << 8) & 0xFF00));
    }

    public static int LEtoBE(int le)
    {
        // Swap the four bytes of a 32-bit value
        return (le >>> 24) | (le << 24) |
               ((le << 8) & 0x00FF0000) | ((le >> 8) & 0x0000FF00);
    }
}
Maybe this kind of code would appeal to the assembly programmer, but to me it just looks like an opportunity for bugs to creep into my system. NIO provides a ByteOrder class that you can use two ways. The primary use is to query the current VM's OS for its native byte order. The secondary use is to define the constants BIG_ENDIAN and LITTLE_ENDIAN, which can be used as arguments to the ByteBuffer class's method order() to set the byte order of the Buffer. Note that ByteBuffer is the only Buffer class with the order() method.
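A small sketch of both uses (the sample value 0x12345678 is arbitrary):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOrderDemo {
    public static void main(String[] args) {
        // Primary use: query the native byte order of the platform the VM runs on
        System.out.println("native order: " + ByteOrder.nativeOrder());

        // Secondary use: set a ByteBuffer's byte order via the constants.
        // Write an int in little-endian order, then reread it as big-endian.
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(0, 0x12345678);                   // stored as 78 56 34 12

        buf.order(ByteOrder.BIG_ENDIAN);
        System.out.println(Integer.toHexString(buf.getInt(0)));  // prints 78563412
    }
}
```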
All of these ways of dealing with buffers are convenient and efficient compared to working with String objects and raw arrays. However, the true power of buffers is derived from how they deal with bulk moves. Bulk moves are optimized for moving large amounts of data at once. Instead of looping over an array and copying data one element at a time, a bulk move lets us copy everything to or from an array in a single call. This allows the underlying implementation of the Buffer class to take advantage of any hardware- or operating system-level I/O routines that might exist for the type of data move being performed.
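A minimal sketch of a bulk put and a bulk get (the sizes are arbitrary):

```java
import java.nio.ByteBuffer;

public class BulkMoveDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(1024);
        byte[] chunk = new byte[100];

        // Bulk put: one call copies the whole array instead of 100 single puts
        buf.put(chunk);

        // Drain it back out in one bulk get
        buf.flip();                          // limit = position, position = 0
        byte[] out = new byte[buf.remaining()];
        buf.get(out);                        // one call moves all 100 bytes

        System.out.println(out.length);      // prints 100
    }
}
```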
Channels
Channels are the abstraction for how data gets from place to place. We pack data into buffers and then pack the buffers into channels. There are several species and subspecies of channels. They can be readable, writeable, socket-based, file-based, interruptible, and selectable as well as several other possibilities (some of which are rather esoteric). Most channels are specified as interfaces rather than abstract classes, which allows implementations to be written in native code and also allows a single class to easily implement multiple channel interfaces. Channels are a completely new concept in Java I/O.
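To make the idea concrete, here is a minimal sketch of draining a file-based channel into a buffer; the helper name countBytes and the choice of a FileInputStream-backed channel are my own, not anything mandated by NIO:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ChannelRead {
    // Reads the file at the given path through a FileChannel,
    // buffer by buffer, and returns the total number of bytes seen.
    public static long countBytes(String path) throws IOException {
        FileInputStream fis = new FileInputStream(path);
        FileChannel channel = fis.getChannel();      // channel view of the stream
        ByteBuffer buf = ByteBuffer.allocate(8192);
        long total = 0;
        int n;
        while ((n = channel.read(buf)) != -1) {      // fill the buffer from the channel
            total += n;
            buf.clear();                             // reset to be filled again
        }
        channel.close();
        fis.close();
        return total;
    }
}
```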
File Locking
File locking allows us to specify that we want either an exclusive or a shared lock on a file. Since not all operating systems support shared locks, requests for shared locks on operating systems that don't support them are treated as exclusive locks. If your operating system doesn't support exclusive locks, you should look for a new operating system. File locking is also a completely new concept in Java I/O.
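A minimal sketch of acquiring and releasing an exclusive lock (the file name shared.dat is a placeholder):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockDemo {
    public static void main(String[] args) throws IOException {
        RandomAccessFile raf = new RandomAccessFile("shared.dat", "rw");
        FileChannel channel = raf.getChannel();

        // lock() blocks until an exclusive lock on the whole file is acquired;
        // lock(position, size, shared) locks a region instead, with shared = true
        // requesting a shared lock (treated as exclusive where unsupported)
        FileLock lock = channel.lock();
        try {
            // ... safely modify the file here ...
        } finally {
            lock.release();                  // always give the lock back
        }
        raf.close();
    }
}
```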
Memory Mapped Files
Think of memory mapped files as doing for file I/O what virtual memory does for physical and user memory at the operating system level. With memory mapped files, we can treat the entire file as if it were in a ByteBuffer. Even if the file is much larger than the amount of memory we have available to us for creating our ByteBuffer, the memory mapped file class handles all the swapping of data to and from the file system for us--and probably much more efficiently than we would have. This is a perfect example of how NIO works; it gives us an abstraction that is both easy to use and more efficient than a hand-coded solution.
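A minimal sketch (the file name big.dat is a placeholder):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapDemo {
    public static void main(String[] args) throws IOException {
        RandomAccessFile raf = new RandomAccessFile("big.dat", "r");
        FileChannel channel = raf.getChannel();

        // Map the whole file; the OS pages data in and out as we touch it
        MappedByteBuffer map =
            channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());

        byte first = map.get(0);                  // random access anywhere in the file,
        byte last  = map.get(map.limit() - 1);    // no explicit read() calls needed
        System.out.println(first + " " + last);

        channel.close();
        raf.close();
    }
}
```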
Sockets and Selectors
Sockets (or more accurately, socket channels) are not as new to Java I/O as some of the other concepts. Sockets were included in previous Java versions in the java.net package. However, in NIO, we combine sockets with the concept of selectors, which are a way of monitoring multiple channels at once. The result is that we gain the ability to do non-blocking I/O and to manage many simultaneous connections within a single thread, solving the traditional scalability problem of server-side Java I/O.
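Here is a bare-bones sketch of the selector loop for a non-blocking server; the port number is an arbitrary choice, and real code would also handle reads, writes, and errors:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);                   // non-blocking mode
        server.socket().bind(new InetSocketAddress(9000)); // port is an assumption
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                 // blocks until some channel is ready
            Iterator it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = (SelectionKey) it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // One thread accepts and monitors every connection
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // ... read from (SocketChannel) key.channel() ...
                }
            }
        }
    }
}
```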
Character Sets (Charsets)
Although Java has been a pioneer in the area of internationalization and is one of the few languages to support Unicode natively, there was still a void to fill in translating between various character set encodings. NIO provides an API for standard character set transcoding as well as a standardized interface for creating your own encodings. Why you would ever want to do that I can't guess, but at least now you have the flexibility to do it.
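A quick sketch of round-tripping text through UTF-8 (the sample string is arbitrary):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;

public class CharsetDemo {
    public static void main(String[] args) {
        Charset utf8 = Charset.forName("UTF-8");

        // Encode characters to bytes...
        ByteBuffer bytes = utf8.encode("caf\u00e9");
        System.out.println(bytes.remaining());   // prints 5: the accented e takes two bytes

        // ...and decode them back
        CharBuffer chars = utf8.decode(bytes);
        System.out.println(chars.toString());    // the original string again
    }
}
```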
Wrapping It All Up
NIO provides a much-needed boost to Java application performance by providing new and improved classes and interfaces for managing various types of I/O operations. The Buffer classes are a convenient way of handling data, and we examined the relationships and nuances of their attributes. The goal of I/O is to move data as few times as possible, which often conflicts with layers of abstraction. NIO provides new abstractions rather than new layers to create an interface that is both efficient and easy to use. The hidden gem ByteOrder class allows us to easily deal with the complexities of byte ordering without having to write our own solution. Bulk moves are the main motivation behind using the Buffer classes because they allow us to take advantage of hardware- and operating system-level I/O accelerations. Channels, sockets, and selectors all collaborate to give us scalable I/O, and there have been some nice enhancements to file I/O in the areas of file locking and memory mapped files. Overall, the best thing about NIO is that Sun has been able to pull off the difficult task of coupling a good abstraction with higher performance and thus has been able to at least partially address Java's number one criticism.
For more information, check out Java NIO by Ron Hitchens.
Michael J. Floyd is the Software Engineering Manager for DivXNetworks. He is also a consultant for San Diego State University and can be reached at
MC Press Online