Implement these user-defined functions to save space when storing large amounts of data.
In modern applications, it's typical to store large amounts of text data, including emails, rich text documents, audit logs, and XML. Recording this information in a database table can impose a significant tax on disk space; using a compression algorithm can alleviate that tax. In this tip, I'll demonstrate two user-defined functions that implement zlib compression to deflate and inflate data stored or accessed via DB2 as large object data types (BLOB, CLOB, and XML).
Why Compress Data?
Many software vendors have implemented compression for storage and sometimes performance reasons. Multimedia files are often large and can be a problem to store and render. Familiar examples of multimedia compression include JPEG images, MP3 audio files, and MP4 video files. However, these compression schemes often "lose" information and therefore degrade quality, a tradeoff that the industry has found acceptable.
For a different example, since the release of Office 2007, Microsoft has used the Office Open XML document format, which zips multiple XML files into a single XLSX, DOCX, PPTX, etc. file. In other words, Microsoft uses compression as a means of storing large XML documents in a smaller space. Unlike multimedia compression, data loss isn't acceptable with business documents, so a "lossless" algorithm is used to compress the data. Zlib is one such lossless compression algorithm, meaning that after being compressed and decompressed, the data is restored to its exact original form.
Similarly, DB2 can take advantage of compression to reduce the size of stored data and perhaps, in some cases, improve performance. This article presents two user-defined DB2 for i functions called DEFLATE and INFLATE that are built to "crunch" and "uncrunch" large data columns and variables.
Examples of how to use them in an embedded SQL environment are shown here (where host variable CompressedXML is a BLOB and XmlDoc is a CLOB/XML):
-- Compress XML document from variable XmlDoc
EXEC SQL
SELECT DEFLATE(XMLPARSE(DOCUMENT :XmlDoc))
INTO :CompressedXML
FROM SYSIBM.SYSDUMMY1;
-- Decompress XML document
EXEC SQL
SET :XmlDoc=INFLATE_XML(:CompressedXML);
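The same functions work in ordinary dynamic or interactive SQL. Here's a minimal sketch of a round trip through a table; the table name and LOB sizes are hypothetical, and the INFLATE_CLOB variant is described later in this tip:
-- Hypothetical table for holding compressed documents
CREATE TABLE COMPRESS_TEST
(DOC_ID INTEGER NOT NULL PRIMARY KEY,
DOC_DATA BLOB(1M));
-- Compress a CLOB value on the way in
INSERT INTO COMPRESS_TEST
VALUES (1, DEFLATE(CAST('Hello, compression!' AS CLOB(1M))));
-- Decompress on the way out; returns the original text
SELECT INFLATE_CLOB(DOC_DATA)
FROM COMPRESS_TEST
WHERE DOC_ID = 1;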
Which Compression Algorithm Do I Use?
There are many compression algorithms available on the Internet. On the IBM i, I figured C/C++ would perform better than Java, so I sought a C/C++ compression library. For this demonstration, I chose an open-source project called miniz, a relatively small C/C++ library that implements "the zlib and Deflate compressed data format specification standards."
My decision to use miniz was based on free availability, compression speed, ability to handle in-memory data, source code size, format popularity, and support for and compatibility with other languages and platforms (including .NET and Java). Most of all, I chose miniz because I could understand the C/C++ code (a language I'm not very good at) well enough to integrate it with DB2. Of all the free code I came across on the Internet, miniz was relatively simple for a non-C/C++ guy like myself.
Attached here is the C++ code for service program mzlibsql.cpp. I'm not a C++ programmer by trade, so as usual, I cobbled together my code using examples found on the web. (Feel free to forward any suggested improvements.)
To give DB2 compression capability, I had to combine two concepts: processing in-memory data streams in chunks (found in the miniz code Example #3) and implementing DB2 large object (LOB) locator parameter support. The primary reason for using a locator is memory management when compressing a wide range of data sizes. There are only two public methods meant for use by DB2 in this code: zlib_inflate and zlib_deflate.
The instructions for creating the service program from the attached source code may be summarized as follows:
- Download the miniz source code, extract the miniz.c source file, and place it in your C/C++ source folder on the IFS (the sample compile statement uses the IFS, but the source can alternatively be placed in a source member). For this project, I downloaded the latest version available: v1.14.
- Download the attached source file mzlibsql.cpp and place it in the same IFS folder as miniz.
- Create the C++ module (CRTCPPMOD) and service program (CRTSRVPGM). Don't forget to replace your source folder or member names.
Once the service program has been created, the next step is to tell DB2 for i how to use the C++ service program by creating external user-defined function wrappers. These CREATE FUNCTION SQL statements are in the source header. (For an introduction to writing external user-defined functions using RPG, see Reuse Your RPG Code with SQL User-Defined Functions; the concepts are applicable to C and C++.)
In the sample code, I've included six CREATE FUNCTION definitions. Three of them are called DEFLATE, and they accept a BLOB, CLOB, or XML parameter. The remaining three are called INFLATE (for BLOBs), INFLATE_CLOB, and INFLATE_XML. Use the DEFLATE function regardless of the LOB data type. However, use the correct "INFLATE" function to get the data back in its original format. All these functions use the same C++ code; the only difference among them is how DB2 interprets the data it's exchanging with the function.
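To give a sense of what these wrappers look like, here is a minimal sketch of a DEFLATE definition for BLOB input. The library name, LOB sizes, and options shown are illustrative; the authoritative CREATE FUNCTION statements are in the attached source header:
-- Illustrative sketch only; see the source header for the actual definitions
CREATE FUNCTION DEFLATE (DATA BLOB(2G) AS LOCATOR)
RETURNS BLOB(2G)
LANGUAGE C++
PARAMETER STYLE SQL
NO SQL
DETERMINISTIC
EXTERNAL NAME 'MYLIB/MZLIBSQL(zlib_deflate)';
Because the parameter is declared AS LOCATOR, DB2 hands the function a small locator value instead of materializing the entire LOB, which is what keeps memory usage manageable for large inputs.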
Note that DBCS (DBCLOB and NCLOB) data is supported by the DEFLATE function because DB2 can CAST these types to binary. However, because DB2 does not support the CAST of binary data back to DBCS (without writing a helper function), the ability to INFLATE double-byte data is currently unsupported.
I tested this code on a machine running IBM i 7.1, but it should also run on V5R4 and IBM i 6.1 (except for the XML versions of the functions).
How Well Does the Compression Work?
With the compression level set to 10 (max compression, slowest speed), a simple example: 32000 spaces are compressed to just 54 bytes:
-- Returns 54
SELECT LENGTH(DEFLATE(CAST(SPACE(32000) AS CLOB(32K))))
FROM SYSIBM.SYSDUMMY1;
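Because zlib is lossless, you can verify the round trip by inflating what was just deflated; the result should match the original length:
-- Returns 32000
SELECT LENGTH(INFLATE_CLOB(DEFLATE(CAST(SPACE(32000) AS CLOB(32K)))))
FROM SYSIBM.SYSDUMMY1;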
Since you're not likely to squish a bunch of blanks in production, I saved a long MC Press article (Rev Up "i" Reporting with SQL Server 2008 Reporting Services) as an HTML file on the IFS to the tune of 249KB. The compressed file size is only about 35K and the statement took about ½ second to complete.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Demo requires IBM i 7.1 (GET_BLOB_FROM_FILE function)
WITH ZLIB_DEMO AS (
SELECT QSYS2.GET_BLOB_FROM_FILE('/tmp/Rev Up i Reporting.htm') AS HTML_DATA
FROM SYSIBM.SYSDUMMY1
)
-- Returns 254666 and 35792
SELECT LENGTH(HTML_DATA), LENGTH(DEFLATE(HTML_DATA))
FROM ZLIB_DEMO
;
For another example, I had a DB2 group PTF binary file sitting on the IFS in the /tmp folder. This file is about ½GB. If I had to store this kind of file in an application, I could crunch it and save about 30% in space, although it took 6.75 minutes on my virtualized test machine to deflate it (again the compression level was set to 10, which is max compression but minimal speed):
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Demo requires IBM i 7.1 (GET_BLOB_FROM_FILE function)
WITH ZLIB_DEMO AS (
SELECT QSYS2.GET_BLOB_FROM_FILE('/tmp/SF99701_1.bin') AS BIN_DATA
FROM SYSIBM.SYSDUMMY1
)
-- Returned 540252160 and 382534049 in 6.75 minutes
SELECT LENGTH(BIN_DATA), LENGTH(DEFLATE(BIN_DATA))
FROM ZLIB_DEMO
;
So, how much "bang for your buck" comes from compression depends on the data itself. Text and XML usually yield sizable storage savings, while binary data is difficult to predict. In the examples above, the HTML file shrank to about 14 percent of its original size, while the PTF binary file only shrank to about 71 percent. Other document formats, such as PDF, already store some of their elements, notably images, in compressed form (using zlib compression, oddly enough), so large PDFs may not compress very well if their size is due to image content.
Notes for Application Development
When using compression in applications, keep in mind the following:
- Compression doesn't work well with modern media formats (e.g., JPEG images, MP3 audio, and MP4 video) because the data is already compressed.
- Because compressed data isn't easy to sift through, key information from the data should be saved in standard database table columns. If you're crunching XML, it often makes sense to extract important info (customer number, customer job, quote id, etc.) that can be saved as normal columnar data to aid the search and retrieval of the compressed data (see the table sketch after this list).
- Don't save needless information just because it will now fit! As always, be careful when deciding what an application will store.
- Beware the CPU overhead of compressing and decompressing data. If your CPU is already taxed with the existing application load, compression may put it over the edge. Also, you may need to establish a minimum size threshold (say, 1MB for XML) before deciding that the space savings are worth the effort. It's also worth considering whether to leave current data uncompressed while compressing historical data.
- When working with very large objects, remember that memory consumption becomes a major concern. Using LOB locators (similar to pointers) in your functions and procedures can save DB2 from creating unnecessary copies of the data.
- Don't make the mistake of dumping a compressed LOB to an IFS file with a zip extension and then expecting the file to work with your favorite desktop zip tool. While the file data is compressed, it's just a raw compressed memory stream; it lacks the container structures (such as file headers and the central directory) that a normal zip utility requires.
- Review the miniz notes, because there are several suggestions on how to tune the functions for performance, depending on the size and type of data being compressed. For example, look for the variable called "level." This is the compression level, and it accepts a value from 0 (max speed) to 10 (max compression). In the sample code, this value is set to 6, which is a balance between compression speed and size optimization. Adjust this value as needed for your application. When I reset the compression "level" in the code to 5 (a balance of speed vs. compression), the ½GB compression time was reduced to about 2½ minutes, and the resulting file wasn't significantly bigger than when using level 10 (max compression).
- Manipulating large object values requires an active transaction. This can be a problem when working with legacy tables that are not journaled. One way to overcome this problem is to specify the "WITH NC" clause on the SQL statement:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
INSERT INTO SQL.COMPRESS_DEMO
VALUES (1,1,'/tmp/Rev Up i Reporting.htm',SQL.DEFLATE(QSYS2.GET_BLOB_FROM_FILE('/tmp/Rev Up i Reporting.htm')))
WITH NC; -- Allows an insert without active journaling
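As mentioned in the list above, keeping searchable keys outside the compressed LOB is usually worth the small amount of extra space. Here is a hypothetical sketch of that design; all names and sizes are illustrative:
-- Key fields stay as normal columns; only the bulky XML is compressed
CREATE TABLE QUOTE_ARCHIVE
(QUOTE_ID INTEGER NOT NULL PRIMARY KEY,
CUSTOMER_NO INTEGER NOT NULL,
CREATED TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
QUOTE_XML BLOB(100M)); -- holds the DEFLATEd document
-- Searches never need to touch the compressed column
SELECT QUOTE_ID, CREATED
FROM QUOTE_ARCHIVE
WHERE CUSTOMER_NO = 12345;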
In the past, many applications have been limited because it was not feasible to record large amounts of information. Compressing data (especially text) allows you to design applications that record and retrieve large amounts of data. Implementing compression functionality in your database is similar to what many other vendors have already done. This tip has demonstrated how a data compression algorithm can be used to reduce the storage requirements for LOB data.