What's Wrong with Our Code?

Commentary
When an anthropologist writes the next chapter on the evolution of human tool construction, how will the software we are purchasing today be rated? Will our code be hailed as powerful additions to the human tool chest? Certainly the vendors of these products would like us to think so.

Or will the current crop of software--and the underlying technologies that enable it--be dismissed as faulty, defective pieces of flotsam: mechanisms that are hopelessly flawed, tediously error prone, and riddled with innumerable bugs that make them cheaper to replace than to fix?

An Epidemic of Failure or Business as Usual?

Consider the following study conducted by the National Institute of Standards and Technology (NIST). NIST surveyed 10 CAD/CAM/CAE and product data management (PDM) software vendors and 179 users of these products in the automotive and aerospace industries. It found that 60% of users had experienced significant software errors in the previous year. Those who reported errors indicated an average of 40 major and 70 minor software bugs per year in the tools they were using.

That doesn't sound too bad, does it? After all, even Microsoft discovers a bug in its code on occasion. ;-)

But according to NIST, the total cost impact of these software errors on these manufacturing sectors alone was estimated to be $1.8 billion per year.

NIST conducted a second survey of the financial services sector (4 software vendors and 98 software users). This second survey focused on the development and use of Financial Electronic Data Interchange (FEDI) and clearinghouse software, as well as the software embedded in routers and switches that supports electronic data interchange (EDI). Two-thirds of the respondents--mostly major banks and credit unions--indicated that they had experienced major software errors in the previous year. Those respondents who did have major errors reported an average of 40 major and 49 minor software bugs per year in their FEDI or clearinghouse software systems. Approximately 16% of those bugs were attributed to router and switch problems, and 48% were attributed to transaction software problems. The source of the remaining 36% of errors was unknown.

The total cost impact on the financial services sector was estimated to be $3.3 billion.

In case after case, industry after industry, the use of buggy software is clearly having a detrimental effect upon the bottom lines of our companies.

59 Billion Dollars Worth of Software Bugs

The NIST study, entitled "The Economic Impacts of Inadequate Infrastructure for Software Testing," concludes that the total cost of these software errors to the U.S. economy is an estimated $59.5 billion annually, or about 0.6% of the gross domestic product. The report focuses on the need for a better software testing infrastructure to remove or drastically reduce the number of errors before software products are released into the economy.
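A quick back-of-the-envelope check shows these figures hang together. The sketch below uses only the numbers quoted above; the implied GDP figure is derived, not taken from the report:

```python
# Sanity check of the NIST figures: if $59.5 billion is about 0.6% of
# GDP, what size economy does that imply?
annual_bug_cost = 59.5e9   # NIST estimate, dollars per year
gdp_share = 0.006          # "about 0.6% of the gross domestic product"

implied_gdp = annual_bug_cost / gdp_share
print(f"Implied U.S. GDP: ${implied_gdp / 1e12:.1f} trillion")
# → about $9.9 trillion, consistent with the U.S. economy circa 2001

# The two sector surveys quoted above account for only a sliver of the total:
sector_costs = {"CAD/CAM/CAE & PDM": 1.8e9, "financial services": 3.3e9}
surveyed_share = sum(sector_costs.values()) / annual_bug_cost
print(f"Surveyed sectors cover {surveyed_share:.1%} of the estimated total")
# → about 8.6%, so the bulk of the $59.5 billion lies outside the two surveys
```

In other words, the two industries NIST examined in detail account for well under a tenth of the estimated national cost.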

But identifying "the problem" doesn't really address how products are developed within the software industry. After all, given an adequate budget, the easiest thing for any software company to do is discover that something doesn't work properly. The harder challenge is to build a development process that prevents software bugs from entering the code in the first place. Object-oriented programming (OOP) was once touted as a means by which such software errors could be eliminated, but that promise has not been realized. And today, the dynamics and economics of the software industry seem to make it hopelessly difficult to address the issue of software quality at all.

Constant Change Negates Quality Control

For instance, a typical software application today is invariably composed of thousands of discrete software modules and components that are interlaced and connected--and that often interact with secondary or tertiary functions embedded in the operating systems of the hardware. Middleware, operating systems routines, Web services, communications protocols, and myriad other technologies all interweave to theoretically provide the user with a transparent and highly functional experience that purports to automate or streamline work.

However, too often these software elements are in a constant state of change--through new software releases, updates, or patches--making the testing process of any particular component or package extremely complex. The failure of a single piece of code anywhere within the stream of commands that pulse through our systems may lead to false information, security leaks, or catastrophic failures that may not even be identified as "bugs" by the user. In other words, what is amazing about our systems today is not that there are so many failing pieces, but that anything works at all!
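That last point can be made concrete with a simple series-reliability calculation. The figures below are illustrative assumptions, not measurements from the article: when a transaction must pass through many interdependent components, even highly reliable parts multiply into a fragile whole.

```python
# Series reliability: a request succeeds only if every component it
# touches succeeds. Illustrative figures, not measured data.
per_component_reliability = 0.999   # each module works 99.9% of the time
components_in_path = 1000           # modules touched by one transaction

system_reliability = per_component_reliability ** components_in_path
print(f"End-to-end success rate: {system_reliability:.1%}")
# → about 36.8%: a thousand 99.9%-reliable parts chained in series
#   fail together nearly two times out of three
```

Real systems survive only because of retries, redundancy, and error handling layered on top; strip those away, and the arithmetic above is what remains.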

Yet clearly something is wrong with the scenario by which the industry is developing products. Consider that in almost any other industry--automotive or aerospace, for example--a failure rate of 60% in any component would result in a national recall of the product. The software industry of today doesn't--and couldn't--operate in that kind of quality-focused environment. Why? Because the economics won't support it.

Rate of Change: 18 Months and Counting Down

The average turnover of products within the software industry is about 18 months. This means that every year and a half some major component of an information system will be upgraded, swapped out, superseded, or scrapped. This cycle of replacement is actually accelerating as competition within the industry progressively drives software and hardware vendors to the next level of technology. For instance, as companies move toward Internet-based Web services, the invisibility of component change--from a systems administration perspective--will make it impossible for any individual software vendor to thoroughly test and guarantee the quality of any particular product. In the realm of Web services, software updates and new releases will roll out to our companies' software infrastructure with the same alacrity as Internet viruses. Meanwhile, our ability to control or even test these changes will be erased or severely hampered.

Quality: The Industry Penalty

One might conclude that the only real remedy for this dilemma is for the industry itself to establish and maintain rigorous standards for the quality of software. But the computing industry has already shown that it will discard quality whenever it interferes with the race to innovate.

Consider the experience of IBM: In 1987, the United States Congress passed Public Law 100-107, the Malcolm Baldrige National Quality Improvement Act, establishing an award to recognize products of the highest quality developed within the United States economy. In 1990, IBM's Rochester Division won the award for the AS/400 computing system. Yet, instead of becoming a standard for excellence within the industry, the AS/400 was branded and penalized as a "legacy proprietary system."

IBM seems to have learned its lesson and has now turned the argument of standards and quality on its ear: Its marketing strategy is now to equate so-called "open-source" standards--standards designed to publicly identify the specifications by which technologies interact--with the concept of "quality standards."

International Standards Versus Standards of Quality

For instance, last January, while marketing both the eServer hardware and the WebSphere middleware computing technologies, Bill Zeitler, Senior Vice President and Group Executive of IBM Server Group, claimed that no single organization can now control the momentum of e-business through proprietary hardware or software schemes. His message was aimed at hardware and software competitors who had built proprietary computing systems. In IBM's view, owning a proprietary technology meant that the quality of the product wasn't up to international standards.

Yet, the irony is that to control the quality of any product, a developer must take ownership of the process by which the product is created. This is, of course, Microsoft's argument as it pours R&D dollars into its proprietary .Net technologies. Furthermore, Microsoft says, there are no embedded standards of quality within the open-source movement. As a result, customers won't experience a decrease in the level of errors from these products: Instead, in all likelihood, each release of innovative software based upon open-source standards may actually see a net increase in customer software errors.

Software Development and the Loss of Innocence

Clearly the industry is headed into a new realm as it grapples with the competing dynamics of quality and competitive innovation. Long gone are the days when a small group of software developers could control the quality of every element of their products. Gone too are the days when testing a product could reveal all the flaws a consumer might experience.

Perhaps when the history of this era of tool-making is written, it won't be the tools themselves that stand out as revolutionary. Instead, maybe it will be the processes by which we learn to control the transformation of technology itself. Until such a history is written, however, our companies seem destined to swim in a sea of technological change, each of us learning to inhale and exhale new advancements in technology without drowning in the flood of technological errors and bugs. And until the software development process has been changed, it seems clear we'll continue to suffer from a surfeit of program bugs that rob us of productivity and steal our precious IT dollars.

Thomas M. Stockwell is the Editor in Chief of MC Press, LLC. He has written extensively about program development, project management, IT management, and IT consulting and has been a frequent contributor to many midrange periodicals. He has authored numerous white papers for iSeries solutions providers. His most recent consulting assignments have been as a Senior Industry Analyst working with IBM on the iSeries, on the mid-market, and specifically on WebSphere brand positioning. He welcomes your comments about this and other articles.
