Using Multiple Member Files with ODBC


Open Database Connectivity (ODBC) is an industry standard that IBM is actively promoting. With a properly configured ODBC driver, you can access data in AS/400 database files from programs on your PC. Some popular programs used with ODBC include Microsoft Access, Excel, and Word; the Microsoft Query application; languages such as C, Visual Basic, and Delphi; and the Database Access GUI application included with Client Access/400.

You can use any of those programs with the Client Access/400 ODBC driver. For the most part, configuring the CA/400 ODBC driver is trouble free, and you can usually start retrieving data from the AS/400 in a matter of minutes.

Unless the data that you need is in a multiple member AS/400 database file.

 

The Problem with Multiple Members

 

ODBC is not an IBM product but, rather, an industry standard intended to present a common API to end-user programs on the PC. If a program knows how to interact with that common API, it can work with different databases, on machines other than the AS/400, simply by using an ODBC driver written for the other database. Because of this design, there is no provision for AS/400-specific multiple-member database files. ODBC responds to SQL statements, and SQL has no syntax for specifying a file member.

This means that, when you use the CA/400 ODBC driver with an AS/400 database file, the driver works with the first member in the file. (The first member is the earliest chronological member in the file, not the first member alphabetically.) If you need to work with a different member, you need to take steps outside of the driver to identify the member.


If you don’t use multiple-member files, you don’t need the techniques that I describe here. For example, if, instead of keeping similar but distinct data sets in separate members, you keep them in separate database files in another library, you can simply identify that library in your ODBC requests. The SQL syntax used with ODBC supports specifying the library (called a “collection”) where your database files are located.
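With a hypothetical library COMPANY2 holding its own copy of a SALES file, for instance, a query passed through ODBC can name that collection directly (if the driver is configured for system naming, a slash is used in place of the period):

SELECT * FROM COMPANY2.SALES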

 

The ODBC API Solution

 

The ODBC User’s Guide (page 33-4) documents a technique you can use to pass CL commands to the server job in which the ODBC request runs. Those CL commands can perform file overrides, which are then in effect for the duration of the server job. That means that, when the ODBC driver opens the database files, the members that you selected in the overrides are used.

Figure 1 shows the sample code used to perform the override. The sample is shown in the C language. The first statement, SQLAllocStmt, is an ODBC API function call that allocates a statement handle. The statement handle is used to identify to the ODBC driver another statement that you will construct. That other statement will contain the OVRDBF command.

After the SQLAllocStmt, there are two C statements, strcpy and strcat. Those statements create the OVRDBF command, which is to be run using the QCMDEXC OS/400 system program. (The strcpy is similar to a CL CHGVAR or an RPG MOVEL operation. The strcat is similar to the CL or RPG CAT operation.)

After the statement is constructed, the SQLExecDirect ODBC API function call passes the call to QCMDEXC through the ODBC driver to the AS/400. The intention is that you run these four lines first in your program and then continue with whatever other ODBC functions you need for your database access.
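To show the whole sequence in context, here is a sketch built around those same statements, followed by an ordinary SELECT that now reads member NO2. The declarations, the SELECT text, and the naming convention are illustrative only, and error checking is omitted:

HSTMT hstmt;
char stmt[512];

/* Allocate a statement handle on the already-connected hdbc */
SQLAllocStmt(hdbc, &hstmt);

/* Build the QCMDEXC call; 65 is the length of the OVRDBF command string */
strcpy(stmt, "call qsys.qcmdexc('OVRDBF FILE(TESTER) TOFILE(JMBLIB/");
strcat(stmt, "TESTER) MBR(NO2) OVRSCOPE(*JOB)', 0000000065.00000)");
SQLExecDirect(hstmt, stmt, SQL_NTS);

/* The override is now in effect for this server job, so an ordinary */
/* SELECT against the file reads member NO2                          */
strcpy(stmt, "SELECT * FROM JMBLIB.TESTER");
SQLExecDirect(hstmt, stmt, SQL_NTS);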

Although this is a fine technique, there are some questions that must be answered. First, why is it necessary to use these statements? Why not just run the OVRDBF command from a CL program that you call, or from a command entry display? Second, what if you’re using an application, such as the Database Access GUI, that doesn’t let you directly work with the ODBC API functions?

The first question, why you need to use such statements, is easy to understand. The job that the ODBC driver runs in is a separate job on the AS/400 and, as such, it can use only overrides that apply to that job. The sequence of events leading up to the ODBC driver job includes starting the router, which starts a job in subsystem QCMN, then starting an ODBC driver connection to the AS/400, which starts a job in subsystem QSERVER. If you start a workstation session to have access to the command entry display, that job is started in subsystem QINTER. Even if you submitted a CL program with the overrides to QSERVER, that would be an entirely separate job from the ODBC connection job.
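If you want to see this for yourself, a command such as WRKACTJOB SBS(QSERVER) from an interactive session shows the server jobs handling ODBC connections; none of them is your interactive job.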

The only way you can get overrides to the ODBC job is to supply them within that job. When you’re working in a programming language such as C or Visual Basic, you can control all aspects of the ODBC job. You control when the ODBC driver connects to the AS/400, when it disconnects, and all of the statements that you want to pass to the driver during the connection. When you have that type of control, it is relatively easy, as shown in Figure 1, to apply an OVRDBF for the database files that you’re going to access.

So you can see that it certainly is possible to use multiple-member files with ODBC. However, you’re probably interested in using ODBC with products that don’t let you have API-level access to the ODBC driver. Most of the query products that work with ODBC don’t allow lower-level access; you buy the product so that you don’t have to work at that level.

 

Alternatives to the ODBC API

 

If you’re using ODBC with a query package, chances are that you do not have access to the ODBC API from directly within the package. What you need is a way to select the member you want to query, either from selections you make within the query product, or through actions that you take outside of the query.

Some of the alternatives that are available include the following:

  • Create single-member logical files based on the physical file member that you want to query.
  • Use an exit-point program to perform file overrides.
  • Perform manual preparation of work files to run the query against.
  • Use an ODBC driver that supports member selection.

I’ll review each of these options.

 

Single-member Logical Files

 

Probably the simplest high-level technique is to create a logical file. It is important that the logical file be a single-member logical file, as your query against the logical file is subject to the same problem as accessing a multiple-member physical file.

Figure 2 shows a sample of the DDS needed to create a logical file for this purpose, and the Create Logical File (CRTLF) CL command used to create the single-member logical over a multiple-member physical file. The logical file doesn’t need any key fields or select/omit statements. On the other hand, if you know that you will be accessing data from the logical in a certain sequence, you might want to investigate the possible performance improvement that results from having an access path in the required sequence. Also, if you want to restrict access to data within the logical file rather than within the query, you can include the select/omit logic.

When you use a query application, you simply select the logical file instead of the physical file as the file to be queried. Because the logical file is associated with a specific member, rather than the first member in the physical file, you will have access to the data you need.

The problem with this approach is the possible proliferation of logical files, which necessitates an easy-to-use naming convention for them. You might be able to sidestep the naming problem by creating a separate library for each group of logical files. For example, if you are keeping data for multiple companies in separate physical file members, you can create a library for each company and create that company’s logical files in it. Another approach would be to add a unique prefix or suffix to the logical file names to group them together. The prefix approach is probably preferable, as it lets you easily locate sets of related logical files in the file selection lists of the query products.
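To illustrate the library-per-company idea with the files from Figure 2, a hypothetical library COMP02 could hold that company’s copy of the logical file, anchored to that company’s member (the member name CO02 is invented here):

CRTLF      FILE(COMP02/FSEVDTL) +
             SRCFILE(TESTODBC/QDDSSRC) +
             SRCMBR(FSEVDTL) +
             MBR(FSEVDTL) +
             DTAMBRS((FS400/FSEVDT (CO02)))

A user querying company 02 data then simply selects library COMP02 in the query product.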

 

Using an Exit Point

 

I mentioned above that ODBC queries run as jobs in the QSERVER subsystem, and pointed out the problem of trying to supply overrides to that job from another job. However, by using an exit point in the Registration Information facility, you have access to the job in which the query is running, before the query starts. (See “Servers, Subsystems, and the Registration Facility,” PC Support Expert, Jan/Feb 1995, and “How to Create an Exit Program,” Client Access/400 Expert, Nov/Dec 1995, for additional information about the Registration Information facility and exit programs.)


The ODBC User’s Guide (page 36-12) and Server Concepts document the exit point within the Registration Information facility that you’ll need. The name of the exit point is QIBM_QZDA_SQL1. You can see where that exit point is defined by running the Work with Registration Information (WRKREGINF) command, and paging up a few screens.
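Registering your own program against that exit point is done with the Add Exit Program (ADDEXITPGM) command; the program and library names shown here are placeholders:

ADDEXITPGM EXITPNT(QIBM_QZDA_SQL1) FORMAT(ZDAQ0100) +
             PGMNBR(1) PGM(MYLIB/SQLEXIT)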

The documentation for the SQL1 exit point indicates that it is called when any one of a dozen SQL functions is used. Figure 3 shows the information that is available to your exit program when it is called for one of those functions. The functions that invoke the exit program are shown in the “Requested Function” section of Figure 3.

Using a test program, I saw that the SQL1 exit point program was called three times when starting a query with the Database Access GUI (included with CA/400 for Windows 3.1). The program was called for functions x'180F' (Create package), x'1803' (Prepare and describe), and x'180E' (Open and fetch), in that order. When called for function x'1803', the first 512 bytes of the SQL statement were available.

To use this technique, your exit point program, which is called for the functions I mentioned, issues the OVRDBF commands for the members that you need. In my testing, I issued the OVRDBF commands using the QCMDEXC API, so that the OVRDBF commands took effect at the same invocation level.

The big problem with this technique is that there is very little in the exit program parameters to let you know which files are being used and which members you should override to. You might be able to issue overrides based on the user profile name, if a certain user profile is always associated with certain file members. Otherwise, you might have to parse the SQL statement text and determine from that which members need to be selected. There is no easy way to pass in a member name that you want selected, unless you can manage to include it as part of the SQL statement.
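To make the user profile idea concrete, here is a minimal CL skeleton of such an exit program. The profile and member names are invented, the parameter layout follows the ZDAQ0100 structure in Figure 3 (user profile in the first 10 bytes), and I’m assuming the usual server exit interface of a one-character accept (‘1’) or reject (‘0’) return code plus the request structure. Unlike my test program, this sketch scopes the override to the job rather than issuing it through QCMDEXC, so it stays in effect after the exit program returns:

PGM        PARM(&RTNCD &RQSDTA)
DCL        VAR(&RTNCD)  TYPE(*CHAR) LEN(1)
DCL        VAR(&RQSDTA) TYPE(*CHAR) LEN(607)  /* ZDAQ0100 per Figure 3 */
DCL        VAR(&USER)   TYPE(*CHAR) LEN(10)
/* User profile is the first field of the request structure */
CHGVAR     VAR(&USER) VALUE(%SST(&RQSDTA 1 10))
/* Accept the request */
CHGVAR     VAR(&RTNCD) VALUE('1')
/* Invented rule: profile COMP02 always works with member CO02 */
IF         COND(&USER *EQ 'COMP02') THEN(DO)
  OVRDBF     FILE(FSEVDT) MBR(CO02) OVRSCOPE(*JOB)
ENDDO
ENDPGM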

 

Manual Preparation

 

You may be able to manually prepare work files that the query will be performed against. This is similar to creating single-member logical files, in that you get the data you need into a single-member work file. The difference is that the work files do not need to be logical files, so you incur no additional overhead with logical file maintenance.

To use this technique, you need some way of copying the data from the multiple-member file to the single-member work file. This can be as simple as a Copy File (CPYF) CL command that you run before starting your ODBC query. This technique can work well if you can batch the preparation work, such as at the beginning of the day, or if your users also have an interactive emulation session and can run the preparation step before starting the query.
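A minimal sketch of such a preparation step, assuming a work library named ODBCWRK (QTEMP is no help here, because the ODBC request runs in a different job), might be:

CPYF       FROMFILE(FS400/FSEVDT) TOFILE(ODBCWRK/EVDTWRK) +
             FROMMBR(CO02) MBROPT(*REPLACE) CRTFILE(*YES)

The query is then pointed at ODBCWRK/EVDTWRK, which has only the one member.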

Needless to say, this isn’t very elegant, but it will work.

 

Use Another ODBC Driver

 

There is at least one ODBC driver that I am aware of that supports member selection. The ShowCase ODBC driver, from ShowCase Corporation, provides a nonstandard ODBC extension that allows you to specify the member name. I have not been able to try this option with a query product, so I don’t know for certain whether a program like the Database Access GUI can take advantage of the extension. However, if your database makes extensive use of multiple-member files, you might want to investigate this option.

 

No Good Way

 


Apparently, OS/400 is one of the very few databases, if not the only one, that has the construct of members within files. The ODBC specification does not include any options to deal with this. Unfortunately, IBM either didn’t have, or didn’t exercise, sufficient influence during the ODBC specification process, which is surprising given the assertion that DB2/400 is one of the most widely used SQL databases.

True, a construct like multiple members has no SQL equivalent. But that’s a pretty weak excuse when you’re faced with using ODBC drivers and client/server query packages to work with your legacy data.

 

References

 

Client Access/400 Windows 3.1 Client for OS/400 ODBC User’s Guide (SC41-3533)
OS/400 Server Concepts and Administration V3R1 (SC41-3740)

SQLAllocStmt (hdbc, &hstmt);
strcpy(stmt, "call qsys.qcmdexc('OVRDBF FILE(TESTER) TOFILE(JMBLIB/");
strcat(stmt, "TESTER) MBR(NO2) OVRSCOPE(*JOB)', 0000000065.00000)");
SQLExecDirect(hstmt, stmt, SQL_NTS);

(From Client Access/400 Windows 3.1 Client for OS/400 ODBC User’s Guide, SC41-3533.)

Figure 1: ODBC API Level Code to Perform an Override

A* FILE FSEVDTL - LOGICAL OVER FS400/FSEVDT(FSEVDTL)
A          R EVDT                     PFILE(FSEVDT)

CRTLF      FILE(TESTODBC/FSEVDTL) +
             SRCFILE(TESTODBC/QDDSSRC) +
             SRCMBR(FSEVDTL) +
             MBR(FSEVDTL) +
             DTAMBRS((FS400/FSEVDT (FSEVDTL)))

Figure 2: Sample DDS and CRTLF Command

Field                          Type        Description
User Profile Name              CHAR(10)    Name of user profile calling server
Server Identifier              CHAR(10)    *SQLSRV
Format Name                    CHAR(8)     ZDAQ0100
Requested Function             BINARY(4)   Function being performed
                                           Example:
                                           x'1803' - Prepare and describe
                                           x'180E' - Open and fetch
                                           x'180F' - Create package
Statement Name                 CHAR(18)    Name of the statement used for prepare
                                           or execute functions
Cursor Name                    CHAR(18)    Name of the cursor used for the open
                                           function
Prepare Option                 CHAR(2)     Option used for the prepare function
Open Attributes                CHAR(2)     Option used for the open function
Extended Dynamic Package Name  CHAR(10)    Name of the extended dynamic SQL package
Package Library Name           CHAR(10)    Name of the library for the extended
                                           dynamic SQL package
DRDA Indicator                 BINARY(2)   0 - connected to local RDB
                                           1 - connected to remote RDB
Commitment Control Level       CHAR(1)     'A' - commit *ALL
                                           'C' - commit *CHANGE
                                           'N' - commit *NONE
                                           'S' - commit cursor stability
SQL Statement                  CHAR(512)   First 512 bytes of the SQL statement

(From Client Access/400 Windows 3.1 Client for OS/400 ODBC User’s Guide, SC41-3533.)

Figure 3: Parameters Available at Exit Point QIBM_QZDA_SQL1

