
Power Up SQL Performance with QSQPRCED


One of the key decisions for implementing an SQL application on the AS/400 is deciding which SQL interface to use. Each of them has its own strengths and weaknesses, and, in fact, the ideal application may come from using several. For example, a client/server application may use Java Database Connectivity (JDBC) in addition to invoking stored procedures implemented in static SQL. One of the most frequently overlooked choices in the programmer's arsenal is the QSQPRCED API, which is part of the base operating system support (i.e., you don't have to have SQL/400 to use it) and provides a direct interface to the same support for extended dynamic SQL that ODBC and JDBC drivers use. QSQPRCED can be a very powerful tool for exploiting the flexibility of dynamic SQL without sacrificing the performance advantages of static SQL. In this article, I'll provide a brief overview of where extended dynamic SQL fits into the system structure and how to take advantage of it using the QSQPRCED API.

About SQL Modes

In order to execute SQL statements, the SQL engine needs a standardized set of information; the first piece is the SQL statement itself. The SQL engine also needs environmental attributes, statistics on how the statement has been used previously, and the access plan, which reflects the query optimizer's decisions on the best way to access the underlying data. This information is required for every SQL statement, but it is stored in different places depending on which SQL mode is being used. Figure 1 shows the basic structure, with the native database on the left and the SQL support on the right. Note that the three modes of SQL shown are the internal modes used by the operating system rather than specific SQL interfaces such as Call Level Interface (CLI) or interactive SQL, which ultimately map to these modes. As Figure 1 indicates, all of the SQL modes use the same query optimizer and share the same underlying DB2 UDB with native non-SQL database access.

In static SQL, the statement is generated at program compilation and is stored as part of the program object. This provides some advantages. First, because more processing is done at compilation, static SQL generally provides the best runtime performance. Second, because the information is stored in a permanent program object, it is available to any user of the program across the system. That is, each new user or job can take


advantage of the information generated without any additional processing on his part. The main disadvantage of static SQL is that all statements must be defined when the program is written. This can be very cumbersome for applications that use a large variety of SQL statements and downright impossible for applications that need to generate SQL statements at runtime.

Dynamic SQL is almost the mirror opposite of static SQL. It provides the advantage of complete flexibility in that the SQL statements can be generated at runtime and all the statement information is built when the application prepares the statement. Because it offers better mechanisms to manage resources that are not likely to be used again, dynamic SQL is the ideal choice for applications in which statements are not reused or are specific to the job or user. For environments where statements are reused, however, the disadvantage is that dynamic SQL lacks some of the performance benefits of static SQL. First, there is more runtime processing required to generate the statement information. Also, because the information is stored in temporary structures that are scoped to individual jobs, there is less opportunity for additional jobs to reuse information that has been built previously. Third, because the statement information is temporary, there is less performance leverage to be gained from the runtime statistics describing how the statement has been used before. Operating system changes have addressed these disadvantages to some extent, but they don't make dynamic SQL performance equivalent to that of static SQL. For example, V4R2 and later provide support for the System Wide Statement Cache (SWSC), which provides an internal repository for dynamic access plans. This can help considerably, but you should remember that the access plan isn't the only thing you need to execute a statement.

Extended dynamic SQL is a hybrid of static SQL and dynamic SQL and combines the best features of each for environments where statements are reused across jobs and users. As with dynamic SQL, statement information is generated at runtime when the statement is prepared. But instead of being stored in a temporary job structure, the statement information is stored in a permanent SQL package (*SQLPKG) object. This type of storage provides several advantages. First, Prepare performance is improved because much of the statement information is already in the SQL package. Subsequent Prepares of the same statement run much faster than the first Prepare. Also, because SQL packages are shared repositories (information is shared rather than being stored in multiple copies across multiple jobs), their use can reduce main storage and I/O requirements for the application. Finally, the statistical information gathered for each statement provides more performance leverage, especially with regard to full opens. For dynamic SQL, each job starts from scratch and takes a few executions to get a good statistical record. With extended dynamic SQL, this information is retained across job terminations and even IPLs because SQL packages are permanent objects. The information isn't destroyed until the SQL package itself is explicitly deleted.

The significant disadvantage of using extended dynamic SQL is that it is not the optimal choice for applications in which statements tend not to be reused. As indicated before, dynamic SQL provides greater flexibility because it does not needlessly retain information that will not be reused. Extended dynamic SQL retains all the information in SQL packages that are limited to approximately 500 MB each (as of V4R3). This sounds big but can still be a limiting factor for some applications, especially those in which access plans are frequently updated at runtime. The best application implementation may come from mixing SQL modes. The choice between dynamic SQL and extended dynamic SQL depends on whether statements will be reused.

QSQPRCED Interface Basics

Now that I've covered when you may want to use extended dynamic SQL, I'll get into using QSQPRCED to implement an extended dynamic SQL application. It seems complicated, but, once you've built your first QSQPRCED program, it's easy to write


more. For the purposes of this article, I'll assume you're already familiar with SQL and highlight QSQPRCED's features, using some easy examples.

QSQPRCED takes five parameters: SQLCA, SQLDA, Function template format, Function template, and Error code. I'll cover them in the order in which they're specified in the interface.

The SQL communications area (SQLCA) parameter is a data structure containing a set of variables that are updated after every execution of an SQL statement. Although it contains fairly granular information, the most important fields are SQLCODE and SQLSTATE. You can use either one to determine the success of an SQL statement execution.

SQLCODE is defined as an integer. A value of 0 indicates that the statement executed successfully without any warnings or errors.

A positive value indicates that the statement executed successfully but with a warning condition. For example, an SQLCODE of 100 on a fetch operation indicates that no record was returned. A negative value indicates that an error was encountered for the statement. For example, a return code of -501 indicates that a cursor was not open. The same SQL codes and states are used across all SQL modes. You can view all of the SQLCODE values on the AS/400 by using the Display Message Descriptions (DSPMSGD) command against the QSQLMSG message file. The two examples just mentioned would appear as SQL0100 and SQL0501, respectively. The SQLCODE values are also documented in DB2 for AS/400 SQL Programming.

SQLSTATE is a five-character field that also reports status. In the fetch example I gave, a value of 02000 would indicate that no record was returned. In the case of a cursor not being opened, the SQLSTATE value would be 24501. The SQLSTATE values are also documented in DB2 for AS/400 SQL Programming.
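These return-code conventions are easy to capture in a small helper. The sketch below is illustrative Python (QSQPRCED itself is called from RPG or another host language); the function names are my own, but the classification rules and the SQLnnnn message-ID mapping follow the description above:

```python
def classify_sqlcode(sqlcode: int) -> str:
    """Classify an SQLCODE: 0 = success, positive = warning,
    negative = error, per the SQLCA conventions."""
    if sqlcode == 0:
        return "success"
    if sqlcode > 0:
        return "warning"   # e.g., +100: no record returned on a fetch
    return "error"         # e.g., -501: cursor not open


def sqlcode_to_msgid(sqlcode: int) -> str:
    """Map an SQLCODE to its message ID in the QSQLMSG message file:
    'SQL' plus the unsigned code, zero-padded to four digits."""
    return f"SQL{abs(sqlcode):04d}"
```

For example, `sqlcode_to_msgid(100)` yields `"SQL0100"` and `sqlcode_to_msgid(-501)` yields `"SQL0501"`, the two message IDs mentioned above.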

For information about the other fields in the SQLCA, please refer to Appendix B of DB2 for AS/400 SQL Reference.

The SQL descriptor area (SQLDA) parameter is a set of variables that describes the host variables for a given SQL statement. It can be the output of a Prepare or Describe operation. It is also the input for Open, Fetch, and Execute operations. In the simplest case, the SQLDA contains the names of the columns, the values indicating what types of variables the SQLDA represents, the lengths of the associated data, and the pointers to the program variables for the data. The SQLDA provides many other variations to support alternative column-naming and large object (LOB) support in V4R4. For more information, please refer to Appendix C of DB2 for AS/400 SQL Reference.
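Because the SQLDA has a fixed layout (a 16-byte header followed by one 80-byte SQLVAR entry per column or parameter marker), its total length is easy to compute. The example program later sets SQLDABC with exactly this formula; here is the same arithmetic as an illustrative Python sketch:

```python
SQLDA_HEADER_LEN = 16   # SQLDAID(8) + SQLDABC(4) + SQLN(2) + SQLD(2)
SQLVAR_ENTRY_LEN = 80   # type, length, reserved area, data/indicator pointers, name

def sqlda_length(num_entries: int) -> int:
    """Return the SQLDABC value for an SQLDA holding num_entries
    SQLVAR entries: 80 bytes per entry plus the 16-byte header."""
    return SQLDA_HEADER_LEN + num_entries * SQLVAR_ENTRY_LEN
```

A two-entry SQLDA (as used for the two-parameter INSERT later in this article) is therefore 176 bytes long.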

Function Template Format

The Function template format parameter is a character field that indicates which template is being used as input to QSQPRCED, which supports four different functional templates. The contents of each template format are fixed so applications don't need to be recompiled as new functions are added to the API. Although there are interesting and useful options in the various templates, I will focus on the simplest, which is designated as SQLP0100. For more information on the four templates and their contents, please see OS/400 File APIs V4R4.

Function Template

Function template is the trickiest parameter but isn't nearly as complicated as it might seem at first. You'll see in the example programs that you can use the same template over and over again and that the changes based on the specific function are minor. I'll start with the parameters that are required on every call to the API.

First, Function code is a one-character value indicating the operation to be performed. The values and their corresponding basic operations are as follows:

• 1-Create SQL package
• 2-Prepare a statement
• 3-Execute a statement
• 4-Open a cursor
• 5-Fetch from a cursor
• 6-Close a cursor

Note that these operations follow a logical sequence in that you have to create the SQL package first and prepare a suitable statement in the package before you can execute it or open a cursor that uses it. QSQPRCED supports additional operations for more specialized applications that I won't cover here.
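The function codes and the ordering constraints just described can be sketched as a small table. The dependency check below is purely illustrative (it is not part of the API); it encodes the rule that the package must be created before a statement is prepared, and a statement must be prepared before it is executed or opened:

```python
FUNCTIONS = {
    "1": "Create SQL package",
    "2": "Prepare a statement",
    "3": "Execute a statement",
    "4": "Open a cursor",
    "5": "Fetch from a cursor",
    "6": "Close a cursor",
}

# Functions that must already have run before each code makes sense.
PREREQS = {
    "1": set(),
    "2": {"1"},        # package must exist before a prepare
    "3": {"1", "2"},   # execute needs a prepared statement
    "4": {"1", "2"},   # open needs a prepared SELECT
    "5": {"4"},        # fetch needs an open cursor
    "6": {"4"},        # close needs an open cursor
}

def sequence_ok(calls: list[str]) -> bool:
    """Check that a sequence of function codes respects the logical order."""
    seen: set[str] = set()
    for f in calls:
        if not PREREQS[f] <= seen:
            return False
        seen.add(f)
    return True
```

The example program later in this article follows exactly such a sequence: create, prepare, execute, then open/fetch/close for the SELECT.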

Next, the SQL package name and Library name are a pair of 10-character fields in which you specify the name and library for the SQL package being used. A few simple guidelines apply. First, just to keep things organized, you should use a library that makes sense for your application; this is probably the library containing the master data or application programs. Second, because many of the advantages of SQL packages come from their being permanent objects, it generally doesn't make sense to put them in the QTEMP library.

Finally, the Main program name and Library name are also a pair of 10-character fields; they identify the program to be used for scoping SQL resources. This is generally the top, or root, program for your application. When it leaves the stack, the SQL environment and all of its resources, such as open cursors, are deleted. An alternative is to specify *ENDACTGRP, which indicates that the SQL resources won't be released until the program's activation group terminates.

The next set of values I'll cover is environment values that must be specified when an SQL package is created. Note that these values are bound at SQL package creation and are ignored on subsequent API calls. QSQPRCED applications do not apply job attributes at runtime. Because these template values are ignored on subsequent calls, the easiest programming style is to set them once at the start of the program and ignore them thereafter.

COMMITMENT CONTROL is a one-character field indicating the commitment control level to be used. The possible values and their corresponding control levels are as follows:

• N-*NONE
• C-*CHANGE
• S-*CS
• A-*ALL

NAMING is a three-character value indicating the convention to be used for resolving unqualified table references. 'SYS' indicates that the library list should be used, and 'SQL' indicates that the library corresponding to the user profile name should be used.

The DATE FORMAT, DATE SEPARATOR, TIME FORMAT, TIME SEPARATOR, and DECIMAL POINT fields are fairly self-explanatory. Although they are set at the system or job level in most environments, they must be explicitly provided for QSQPRCED. For the purposes of my example, I'll use 'YMD', a slash ('/'), 'HMS', a colon (':'), and a period ('.'), respectively. The remaining values in the template all relate to specific functions and will be explained within the example programs in the next section.


To help application programmer productivity, the AS/400 ships the System Openness Includes as an optionally installable part of the base operating system. If you have this option installed, you have access to source files in library QSYSINC containing the declares for the Function template for QSQPRCED. I'll talk about this in more detail in the section containing the sample application program.

Error Code

Unlike SQLCODE and SQLSTATE in the SQLCA, which provide information about the SQL layer, the Error code data structure returns status information from the API itself.

For example, if an invalid parameter value is specified on the API, the Error code data structure determines how the error is reported to the application. The first field in the data structure is an integer indicating how many bytes are available for error information. A value of 0 indicates that any error found by the API should be reported as an escape exception. Because this class of errors consists entirely of application programming errors, setting this value to 0 is generally recommended for real applications.
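The leading portion of that structure can be sketched as two 4-byte integers: bytes provided (set by the caller) and bytes available (filled in by the API). The example program sets the QUSEC field QUSBPRV to 0 for exactly this reason. An illustrative Python sketch of the layout (the helper name is my own):

```python
import struct

def make_error_code(bytes_provided: int = 0) -> bytes:
    """Build the leading portion of an API error-code structure.
    bytes_provided = 0 asks the API to signal any error as an
    escape exception instead of returning it in the structure."""
    bytes_available = 0  # filled in by the API on return
    # Two 4-byte big-endian integers, matching AS/400 byte order.
    return struct.pack(">ii", bytes_provided, bytes_available)
```

With the default of 0, the caller never has to inspect the structure at all; interface errors surface as exceptions that a message monitor can catch.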

QSQPRCED Program Example

Shown in Figure 2, RPG program CAFEPGM1 is a simple, hypothetical example of an application used by a coffee shop. The application needs to insert prices into the price table and retrieve a price based on the item name. The code is very "nuts and bolts"; making it pretty would all depend on the user interfaces. Note that no errors are tolerated; they are just reported. Follow the code as you read the following paragraphs. If you have the System Openness Includes installed, take a look at member QSQPRCED in the source file QRPGLESRC in library QSYSINC. Note that, in addition to using more cryptic names, these shipped data structures simply aren't built to accommodate potentially large, variable-length values. In the case of QSQPRCED, the statement text length is variable and can be up to 32K-1 bytes. That's why the example program declares its own QSQPRCED template based on the one shipped in QSYSINC rather than using the one in QSYSINC directly. If you're running with just base operating system support, the same is true of SQLDA. However, if you have the product DB2 UDB for AS/400 Query Manager and SQL Development Kit installed, you can take advantage of shipped SQLDA templates. For more information, see DB2 for AS/400 SQL Reference.

All of the calls to QSQPRCED are done from subroutine CALLAPI, which calls the API and generates a message if an error occurs. Because I'm going to use QSQPRCED, the first step is to create the SQL package. As you can see in the RPG source, all of the names and environmental values are set as part of initialization.

Running this program will create an SQL package named CAFEPKG1 in library CAFEDATA. Note that I've specified CAFEPGM1 as the main program because it is the root program of the application. As a matter of programming style, the function is always set just before the subroutine call to avoid losing track of what operation is about to run.

Once the package has been created, I can prepare a statement to insert a row of data into the table. The statement name is required for Prepare, Execute, Open, and Describe. (The statement itself is specified only in Prepare.) Because it varies, you must also set the statement length. The final new field contains the Open options. It is a one-character field with the first four bits indicating whether the statement is enabled for read, write, update, and delete, respectively. This value is required for every Prepare and Open but is most relevant to SELECT statements. For other statement types, the statement itself overrides these options implicitly. That's why, in this example, the value is set to a hex value of F0, which, theoretically, would enable the INSERT statement to read, update, and delete in addition to the very appropriate write option. Unless you're working with a SELECT statement, you don't need to be careful with this field. After preparing the statement, the program executes it.
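The open-options byte packs the four capabilities into its high-order bits, which is where the example's values come from. A small illustrative Python sketch (the function name is my own) of how x'F0' for the INSERT and the read-only x'80' used later for the SELECT are built:

```python
def open_options(read=False, write=False, update=False, delete=False) -> int:
    """Build the one-byte Open options value: the first four bits
    flag read, write, update, and delete capability, respectively."""
    value = 0
    if read:
        value |= 0x80
    if write:
        value |= 0x40
    if update:
        value |= 0x20
    if delete:
        value |= 0x10
    return value
```

All four bits on gives 0x80 + 0x40 + 0x20 + 0x10 = 0xF0, while a read-only cursor needs just the first bit, 0x80.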


The application gets much easier as you go along. My example was intentionally unrealistic in that it used literal values to populate the PRICES table. In order to insert values based on the program variables, I need to use the SQLDA, which describes parameter markers in the SQL statement. Take a look at the second Prepare for a statement, which has parameter markers. Note that the only change to Prepare processing is that I now have question marks (?) in the statement text.

Now I'll execute the statement a couple of times. The big change here involves setting up the SQLDA. The program must set both SQLD and SQLN to indicate the number of entries contained in the SQLDA (one per parameter marker value). SQLDABC must also be set to the length of the SQLDA: 80 bytes per entry plus 16 bytes for the header. For each of the parameter markers, I had to set the type, the length, and a pointer to the variable containing the value to be used on this operation. Note that, for a packed decimal, you must set SQLLEN with the precision (total digits) in the first byte and the scale (decimal positions) in the second byte. Note that I didn't need to set anything but the program variable before doing the second INSERT.
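That packed-decimal encoding explains the otherwise cryptic expression 11 * 256 + 2 in the RPG source: 11 digits in the high byte and 2 decimal positions in the low byte of the two-byte SQLLEN, matching the 11p 2 definition of ItemPrice. An illustrative Python sketch (the helper name is my own):

```python
def packed_sqllen(precision: int, scale: int) -> int:
    """Encode a packed-decimal SQLLEN value: precision (total digits)
    in the high-order byte, scale (decimal positions) in the low byte."""
    if not (1 <= precision <= 31 and 0 <= scale <= precision):
        raise ValueError("invalid packed-decimal precision/scale")
    return precision * 256 + scale
```

For the example's 11-digit, 2-decimal price field, this yields 11 * 256 + 2 = 2818.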

Take a look at the longer example for a SELECT statement that is used to open a cursor, fetch, and close. In this case, I'll use the SQLDA data structure to identify inputs on Open and to identify output variables for Fetch. Note that I only had to add two new fields to make this work. The first is the cursor name, which is required on any cursor-oriented operation, such as Open, Fetch, or Close. The second new field is the blocking factor, which must be specified for Open and Fetch operations. As you can see, once you've got a program pulled together, it is easy to replicate or extend.

Additional Tips

Here are some additional tips to help you build more robust, scalable applications. First, because much of the power of QSQPRCED applications comes from SQL packages being permanent objects, you should design your applications to create the SQL package and tolerate the SQLCODE value of -601 if the SQL package already exists. It generally doesn't make sense to build an application in which the SQL package is deleted after every run. Next, because SQL packages are limited in size, it makes sense for a large application to use several different SQL packages for different parts of the application. If you ever want to look at what statements are in an SQL package, you can use the Print SQL Information (PRTSQLINF) command, which summarizes the package attributes, the statements in it, and their access plans. Finally, if your application needs to handle remote database access and your system is on V4R4, you may also want to consider the QxdaProcessExtDynEDRS API, which provides similar support across remote connections.

The simple examples in this article should be enough to get you started on writing your own QSQPRCED-based applications. If you want to learn more about the AS/400 SQL engine, I'd also recommend attending the IBM Learning Services course DB2 UDB for AS/400 SQL & Query Performance Tuning and Monitoring Workshop, course code S6140. For environments where statements are reused, QSQPRCED is a good option that enables your application to leverage the power of dynamic SQL without sacrificing the performance advantages of static SQL.

REFERENCES AND RELATED MATERIALS

• DB2 for AS/400 SQL Programming (SC41-5611-03, CD-ROM QB3AQ803)
• DB2 for AS/400 SQL Reference (SC41-5612-03, CD-ROM QB3AQ903)
• DB2 UDB for AS/400 Education Web site: www.as400.ibm.com/db2/db2educ_m.htm
• OS/400 File APIs V4R4 (SC41-5857-03, CD-ROM QB3AMI03)


[Figure 1 diagram: the SQL query optimizer sits above DB2 UDB (data storage and management), which is shared by native record-oriented I/O and the three SQL modes. The modes differ in where statement information lives: static SQL prepares at compile time and saves its data in a permanent program object; dynamic SQL prepares at runtime and saves its data in a temporary job structure; extended dynamic SQL prepares at runtime and saves its data in a permanent SQL package.]

Figure 1: External SQL interfaces are supported by three internal modes.

D* Define the SQLDA
D SQL_NUM         C                   CONST(2)
D SQLDA           DS
D SQLDAID                 1      8A
D SQLDABC                 9     12B 0
D SQLN                   13     14B 0
D SQLD                   15     16B 0
D SQL_VAR                        80A   DIM(SQL_NUM)
D                        17     18B 0
D                        19     20B 0
D                        21     32A
D                        33     48*
D                        49     64*
D                        65     66B 0
D                        67     96A
D SQLVAR          DS
D SQLTYPE                 1      2B 0
D SQLLEN                  3      4B 0
D SQLRES                  5     16A
D SQLDATA                17     32*
D SQLIND                 33     48*
D SQLNAMELEN             49     50B 0
D SQLNAME                51     80A
D*
D* Get standard templates for QSQPRCED (includes SQLCA)
D/COPY QSYSINC/QRPGLESRC,QSQPRCED
D*
D* Get Error Code structure
D/COPY QSYSINC/QRPGLESRC,QUSEC
D*
D* SQLP0100 function template for QSQPRCED API
D FuncTemplate    DS
D Function                1      1
D PkgName                 2     11    inz('CAFEPKG1')
D PkgLib                 12     21    inz('CAFEDATA')
D MainPgm                22     31    inz('CAFEPGM1')
D MainPgmLib             32     41    inz('CAFEDATA')
D StmtName               42     59
D CursorName             60     77
D OpenOpts               78     78
D DescClause             79     79
D CommitCtl              80     80    inz('N')
D DateFormat             81     83    inz('YMD')
D DateSep                84     84    inz('/')
D TimeFormat             85     87    inz('HMS')
D TimeSep                88     88    inz(':')
D Naming                 89     91    inz('SYS')
D DecimalPt              92     92    inz('.')
D BlockFactor            93     94B 0
D StmtLength             95     96B 0
D Statement                    500
D*
D* Function template format value for QSQPRCED
D Format          S              8    inz('SQLP0100')
D*
D* Local variables
D ErrorText       S             50
D ItemName        S             30
D ItemPrice       S             11p 2
C* Initialize needed fields in SQLDA
C                   eval      SQLN = 0
C                   eval      SQLD = 0
C                   eval      SQLDABC = (SQLN * 80) + 16
C* Initialize needed fields in QUSEC
C                   eval      QUSBPRV = 0
C*  * QSQPRCED will signal exceptions for interface errors
C*
C* Create the SQL package
C                   eval      Function = '1'
C                   exsr      CallAPI
C************ Prepare/Execute for a simple INSERT ******************
C                   eval      StmtName = 'AddPrice1'
C                   eval      Statement =
C                             'INSERT INTO CAFEDATA/PRICES ' +
C                             ' VALUES(''Single Latte'',1.75)'
C                   eval      StmtLength = %LEN(%TRIMR(Statement))
C                   eval      OpenOpts = x'F0'
C                   eval      Function = '2'
C                   exsr      CallAPI
C                   eval      Function = '3'
C                   exsr      CallAPI
C*
C*********** Prepare/Execute for an INSERT using parameter markers ********
C                   eval      StmtName = 'AddPrice2'
C                   eval      OpenOpts = x'F0'
C                   eval      Statement =
C                             'INSERT INTO CAFEDATA/PRICES ' +
C                             ' VALUES(?,?)'
C                   eval      StmtLength = %LEN(%TRIMR(Statement))
C                   eval      Function = '2'
C                   exsr      CallAPI
C* It's prepared, now insert
C                   eval      SQLN = 2
C                   eval      SQLD = 2
C                   eval      SQLDABC = (SQLN * 80) + 16
C                   eval      SQLTYPE = 452
C*  * fixed-length character
C                   eval      ItemName = 'Double Latte'
C                   eval      SQLLEN = %len(ItemName)
C                   eval      SQLDATA = %addr(ItemName)
C                   eval      SQL_VAR(1) = SQLVAR
C                   eval      SQLTYPE = 484
C*  * packed decimal
C                   eval      ItemPrice = 2.75
C                   eval      SQLLEN = 11 * 256 + 2
C*  * precision (digits) in the first byte, scale in the second byte
C                   eval      SQLDATA = %addr(ItemPrice)
C                   eval      SQL_VAR(2) = SQLVAR
C                   eval      Function = '3'
C                   exsr      CallAPI
C*
C* Execute the INSERT statement again with different values
C                   eval      ItemName = 'Espresso'
C                   eval      ItemPrice = 1.25
C                   eval      Function = '3'
C                   exsr      CallAPI
C*
C* Prepare/Open/Fetch/Close for SELECT using parameter markers
C*
C* Prepare the SELECT statement
C                   eval      StmtName = 'PriceQuery1'
C                   eval      OpenOpts = x'80'
C*  * Read only
C                   eval      Statement =
C                             'SELECT PRICE FROM CAFEDATA/PRICES ' +
C                             ' WHERE ITEM = ? '
C                   eval      StmtLength = %LEN(%TRIMR(Statement))
C                   eval      Function = '2'
C                   exsr      CallAPI
C*
C* Open a cursor for the SELECT statement
C                   eval      CursorName = 'PriceCursor1'
C                   eval      SQLN = 1
C                   eval      SQLD = 1
C                   eval      SQLDABC = (SQLN * 80) + 16
C                   eval      SQLTYPE = 452
C                   eval      ItemName = 'Double Latte'
C                   eval      SQLLEN = %len(ItemName)
C                   eval      SQLDATA = %addr(ItemName)
C                   eval      SQL_VAR(1) = SQLVAR
C                   eval      BlockFactor = 1
C                   eval      Function = '4'
C                   exsr      CallAPI
C*
C* Fetch data from the cursor
C                   eval      SQLN = 1
C                   eval      SQLD = 1
C                   eval      SQLDABC = (SQLN * 80) + 16
C                   eval      SQLTYPE = 484
C                   eval      SQLLEN = 11 * 256 + 2
C                   eval      SQLDATA = %addr(ItemPrice)
C                   eval      SQL_VAR(1) = SQLVAR
C                   eval      Function = '5'
C                   exsr      CallAPI
C*
C* Close the cursor
C                   eval      CursorName = 'PriceCursor1'
C                   eval      Function = '6'
C                   exsr      CallAPI
C*
C                   eval      *inlr = '1'
C                   return
C********* Subroutine to invoke QSQPRCED
C     CallAPI       begsr
C                   call      'QSQPRCED'
C                   parm                    QSQS
C                   parm                    SQLDA
C                   parm                    Format
C                   parm                    FuncTemplate
C                   parm                    QUSEC
C     QSQLCODE      ifne      0
C                   eval      ErrorText = 'Function ' + Function +
C                             ' error, Stmt: ' + Statement
C     ErrorText     dsply
C                   eval      *inlr = '1'
C                   return
C                   endif
C                   endsr

Figure 2: Program CAFEPGM1 uses QSQPRCED to create an SQL package, populate a table, and retrieve data.

