
Advances in XML


XML has become the standard for exchanging complex data, and i is quickly becoming the standard platform for processing it.

 

Passing business data from one machine to another has always been one of the more difficult issues, but one of the most common standards today is XML. XML has a number of features that make it very powerful for data interchange but that don't mesh with the fixed records and fields of traditional i languages. However, each release of i provides new capabilities that make it easy for i programmers to work in the XML world. In this article, we'll examine some of the features that have been added in recent releases, including Version 6.1.

 

In earlier times, it was all about size. Whether it was storing data on expensive disk drives or sending it over slow communication lines, space was at a premium. The easiest way to minimize space requirements was to format the data in flat files with predefined layouts. Numeric data in particular was stored in binary format, and the data itself, whether stored on disk or transmitted over the wire, carried no description of its own contents: no metadata, as such information came to be known.

Removing the Ties That Bind

This sort of hard-coded data definition has a number of benefits, not only in storage space but also in processing speed. If you can define a buffer of contiguous fields in your program and simply overlay data read from disk or from a communication line directly into that buffer, then obviously that's the best performance you can achieve. If, on the other hand, you have to parse the incoming data and validate each field before being able to use it in your program, your processing requirements increase, in some cases significantly.

 

However, a significant downside to such tight coupling exists: Everybody has to agree beforehand on the storage format. And while such agreement is easy to achieve when one RPG program calls another RPG program on the same machine, it is a little more difficult to come by when one side is a Java application or a .NET program and the other is RPG on the i. It's often quite a feat to convert data from one programming environment into data understood by another. As an example, packed decimal fields, a mainstay in all i languages, do not exist in most other environments. In fact, one of the most important tools in the IBM Toolbox for Java is a set of classes that translate between fundamental RPG and Java data types (including data structures).

 

So as long as we insist on transmitting data between layers in a binary format, there will be data translation issues. What is needed instead is a standard platform-independent data representation. And while they are wildly divergent in both origin and use, the two constructs that best fit that definition today are SQL result sets and XML documents.

 

I won't spend a whole lot of time on result sets here. The only reason I bring them up is that, when they are appropriate, they can reduce a lot of the problems associated with binding, and anybody looking to connect two layers of an application should always check whether SQL is a good fit. Their primary advantages include embedded metadata and very standard, well-known protocols for accessing the data. On the negative side, they are rather bulky. Also, result sets require an active connection to the database and so can't be persisted (saved to disk); thus, they don't lend themselves well to asynchronous tasks. (Note: There are ways around these difficulties, including RowSets in Java and other techniques, but that's not the focus of this article.)

 

Result sets grew out of the concept of an RDBMS, so they're closely linked to the concept of database tables: specifically, retrieving rows of data from those tables. The tables are already defined, so an external definition of the data isn't a high priority. The inclusion of metadata in the result set makes it a little easier to handle ad hoc requests, but generally the data contained within a result set represents rows of fields from an existing table.

 

With XML, it's a little different. The data in an XML document often has no relationship to existing persistent data. XML is close in spirit to Electronic Data Interchange (EDI) documents, which are intended to be standardized messages sent from one computer to another. The typical process would be for one computer to fill in the fields of an EDI document from fields in its database and then transmit the EDI document to another computer, which would then apply that data to its own, often quite different, database.

 

This is where the similarities end. EDI documents are typically fixed-length flat files, with records having varying layouts depending on the record type, usually found in the first few characters of the record. Older formats are entirely fixed, while the newer ANSI standards allow delimited records (often fields are delimited by asterisks and records by tildes). XML documents store data in a tagged format. Let's compare a simple example:

 

An EDI segment containing name and address information might look like this:

 

N1*Pluta Brothers Design, Inc.*542 E. Cunningham Dr.~

Notice that the data in the EDI segment is positional and delimited. I'm not an EDI expert, so I don't know what happens when the data contains the delimiter character; if, for example, the business name had an asterisk in it, I'm not sure how you'd represent it in EDI.

 

XML is a tagged representation. The same information in an XML document might look like this:

 

<COMPANY>
  <NAME>Pluta Brothers Design, Inc.</NAME>
  <ADDRESS1>542 E. Cunningham Dr.</ADDRESS1>
</COMPANY>

XML has specific rules for escaping data (an ampersand in element content, for example, is written as &amp;) and is built from the ground up to support Unicode; there's almost nothing you can't store in an XML document. Not only that, but XML documents also have external definitions that can be used to verify the data in the document prior to sending it to someone else. While that verification can't check for logical data errors, such as invalid customer numbers, it can check to make sure that a customer number has been included if required and even do basic syntax checking on the data. This editing was originally done through something called a Document Type Definition (DTD), which has since been superseded by the XML Schema Document (XSD), or "schema" for short. Here is one possible schema for my XML example above:

 

<xs:element name="company">

 <xs:complexType>

  <xs:sequence>

   <xs:element name="name" type="xs:string"/>

   <xs:element name="address1" type="xs:string"/>

  </xs:sequence>

 </xs:complexType>

</xs:element>

Note I said "one possible schema." That's because there are many, many ways to specify data in an XML document. You can specify attributes on the individual data fields, you can group multiple fields into complex types, you can define the sequence of fields and whether they are optional, and so on.
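For instance, one variation would carry the same data as attributes instead of child elements:

<xs:element name="COMPANY">
  <xs:complexType>
    <xs:attribute name="NAME" type="xs:string" use="required"/>
    <xs:attribute name="ADDRESS1" type="xs:string"/>
  </xs:complexType>
</xs:element>

A document valid against that version collapses to a single tag: <COMPANY NAME="Pluta Brothers Design, Inc." ADDRESS1="542 E. Cunningham Dr."/>. We'll see this attribute style again in the parsing discussion below.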

How Does That Relate to i?

OK, that's all well and good. XML lends itself to formatting and transmitting data from one machine to another by removing the positional requirements of flat files or delimited records. Data is stored in tagged documents, which also provide standardized editing and validation. However, the further we progress down this road, the further we get from the fixed-length structures and record-oriented processing prevalent in standard i languages such as RPG and COBOL.

 

Let's face it; neither language is particularly adept at string handling. I've written XML parsers and generators in RPG, and while the %trim BIF makes generators pretty simple, parsers are no walk in the park, even if you take a number of shortcuts and drop some of the weirder capabilities (such as sending binary data in a tagged field). Until recently, the only other option was to use a parser written in another language. A while ago, Scott Klement wrote an interface to the Expat XML parser, an open-source parser written in C. Scott went to the trouble of porting that parser to the i and then writing interface layers in both C and RPG. Another option is to use one of the powerful Java parsers, such as Apache's Xerces or the parsers supplied with Sun's JAXP. Either of these options obviously requires some level of Java knowledge. Beyond that, you really didn't have a lot of options.

 

Over the past several releases, this has changed significantly. While COBOL had the initial advantage, I think the fact that RPG has support for both SAX and DOM parsing gives it a leg up. I won't go into a long dissertation about DOM vs. SAX processing; the short version is that with a SAX-type parser, you don't call the parser to look for data in the document. Instead, the parser runs through the document and calls you for each event (such as data or a beginning or ending tag). With a DOM parser, you can look for only those elements you need, whereas with a SAX parser, you need to at least acknowledge all the possible events and ignore the ones you don't want. DOM is more flexible, while SAX is typically faster.

V5R3

V5R3 saw the first foray into HLL support for XML; in this case, the XML PARSE statement was added to COBOL. XML PARSE is an event-driven parser similar to SAX-style parsers such as Expat or those provided by Java's JAXP. You define the name of a procedure that will be invoked for each event generated while the XML document is parsed, and the information about each event is passed in several special registers (XML-CODE, XML-EVENT, XML-TEXT, and XML-NTEXT).

 

You code a simple XML PARSE statement, which directs processing to your handler procedure. That procedure is usually a set of evaluate/when constructs, with ultimately one branch for each field in the document. One of the biggest issues I have with SAX-based parsers is that they require a different coding philosophy (or, at the least, some redundant trapping) to handle the following cases:

 

<LINE>
  <ITEM>ABC1234</ITEM>
  <QUANTITY>12</QUANTITY>
</LINE>

<LINE ITEM="ABC1234" QUANTITY="12" />

The first case is a fully tagged example; every child element gets its own tag. The second format uses attributes rather than child tags and significantly reduces the amount of redundant data. However, SAX-based parsers deliver the attributes during the processing of the parent tag (in this case, the LINE tag), which requires extra code in the branch that handles LINE. And if you wish to support both syntaxes, the code that captures the values must appear twice: once for the attributes of LINE and again for the ITEM and QUANTITY child tags.

 

SAX-based parsing can get a bit involved. However, at least the parser is now native to COBOL; that gives COBOL the edge, at least in this release.

V5R4

This release brought XML to RPG in the form of the XML-SAX and XML-INTO statements. The XML-SAX statement provides support similar to that in COBOL, in which a procedure is invoked for each event.

 

XML-SAX %HANDLER(handlerProc : commArea) %XML(xmlDoc : options)

The handlerProc is the procedure to be invoked for each event, while the commArea parameter identifies a data structure (or other variable) used to communicate between the parser and the handler. The xmlDoc parameter can specify either a field containing the XML or a file on the IFS. The options parameter controls many of the other processing options for the operation.
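To make that concrete, here's a minimal sketch of an XML-SAX call and handler skeleton; the names (handler, myDoc, depth, elemName) are my own invention, and a real program would do something useful in each branch:

     D handler         pr            10i 0
     D  commArea                     10i 0
     D  event                        10i 0 value
     D  string                         *   value
     D  stringLen                    20i 0 value
     D  exceptionId                  10i 0 value

     D myDoc           s          65535a   varying
     D depth           s             10i 0
      /free
        // myDoc would be loaded from a file, a parameter, or a socket
        xml-sax %handler(handler : depth) %xml(myDoc);
      /end-free

     P handler         b
     D handler         pi            10i 0
     D  commArea                     10i 0
     D  event                        10i 0 value
     D  string                         *   value
     D  stringLen                    20i 0 value
     D  exceptionId                  10i 0 value

     D chunk           s          65535a   based(string)
     D elemName        s            256a   varying static
      /free
        select;
        when event = *XML_START_ELEMENT;
           elemName = %subst(chunk : 1 : stringLen);
        when event = *XML_CHARS;
           // element content arrives here; dispatch on elemName
        when event = *XML_ATTR_NAME;
           elemName = %subst(chunk : 1 : stringLen);
        when event = *XML_ATTR_CHARS;
           // attribute value; the same dispatch covers the
           // attribute-based syntax shown earlier
        endsl;
        return 0;  // zero tells the parser to keep going
      /end-free
     P handler         e

Note how the *XML_ATTR_NAME and *XML_ATTR_CHARS events let one dispatch routine handle both the fully tagged and the attribute-based documents from the earlier example.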

XML-INTO supports more DOM-style processing, in which data from an XML document is stored directly into a predefined structure. Depending on the options you select, this allows you to read only portions of an XML document with a very simple syntax:

 

XML-INTO variable %XML(xmlDoc : options)

 

The various components of this statement specify the actual work to be done. The variable is typically a data structure, although it can be a simple variable or an array of variables or structures. This is a powerful command that in many cases provides everything needed to get data out of an XML document in a form that a standard RPG program can use.
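Here's a minimal sketch (the field names and sizes are mine) showing the COMPANY document from earlier landing in a data structure:

     D company         ds                  qualified
     D  name                         50a   varying
     D  address1                     50a   varying

     D xmlDoc          s           1000a   varying
      /free
        xmlDoc = '<COMPANY><NAME>Pluta Brothers Design, Inc.</NAME>'
               + '<ADDRESS1>542 E. Cunningham Dr.</ADDRESS1>'
               + '</COMPANY>';

        // 'case=any' maps the uppercase tags to the lowercase subfields
        xml-into company %xml(xmlDoc : 'case=any');

        // company.name and company.address1 now hold the element values
      /end-free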

 

At the same time, COBOL gained the XML GENERATE statement, which allows COBOL programs to generate XML documents. This is increasingly important in the SOA world as Web services become a standard communication technique. Web services are typically made up of an XML request document and an XML response document; programs that wish to participate in the Web services world need to both parse and generate documents.

 

Please note that as far as I can tell, generation of XML in COBOL is limited to fully tagged data. That means that even the simplest records will expand quite a bit. So at this point, COBOL has lost the lead in parsing, but it still has an edge in that it now has generation capabilities, albeit not the most efficient generation.

Finally, 6.1

The problem here has been that even with all the enhancements to the language, RPG has still been dogged by field-size limitations that, while perfectly acceptable in the fixed-length world, caused a lot of problems as we moved into the new stream style of XML-driven code.

 

The ubiquitous 64KB limit was more than enough for most purposes in RPG. Heck, remember that once upon a time one of the leading figures in the IT industry said no computer would ever need more than 640KB of memory total. That being the case, limiting a field to 64KB isn't such a stretch, especially since that's the limit of a 16-bit integer and so allows offsets to be stored as two bytes.

And back in the day of record-based logic, when you read in a single record and processed it, no field needed to be longer than that. It isn't until you start dealing with entire transactions as atomic units that this becomes an issue. Now that programmers are sending entire orders, or even batches of orders, as a single XML message, the buffer that holds the XML document needs to be large enough to hold the entire transaction. As RPG matures from a record-based business language into a more general-purpose processing language, that 64KB limitation becomes onerous. The workaround, using a user space, is just that: a workaround. It allows a larger document to be processed, but it's not very elegant.

 

And then finally, in Version 6.1 (and that is the correct name now; technically, there is no V6R1 of i5/OS; we are now working with IBM i 6.1), IBM increased the limit for fields to 16MB. This may seem a fairly trivial change, but it actually required a lot of work on the part of the compiler teams. You can scan the mailing lists for discussions on how best to represent the VARYING keyword for both backward compatibility and forward movement. In any case, the 16MB limit ought to be enough for most standard documents; anything over 16MB probably needs to be processed in pieces anyway.
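To put that in concrete terms, here's a sketch of a 6.1-style definition; if I have the syntax right, the new LEN keyword handles lengths too large for the D-spec length columns, and VARYING(4) requests a four-byte length prefix instead of the old two-byte one:

     D bigXmlDoc       s               a   len(16000000) varying(4)

A field like that can be handed directly to %XML, so whole transactions can be parsed without the user-space gymnastics.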

 

But then again, where have we heard that before?
