When I first started programming, I was the quintessential whiz-bang programmer. I rang every new bell and blew every new whistle, and I managed to shoehorn every new feature into whatever application I was writing, whether it needed it or not.
This was bad enough in the days of assembly language or C, when you were pretty much limited to a single language, or even early versions of RPG, when there were only so many ways of doing things. But nowadays, technology is bolting ahead by leaps and bounds, and it's hard to tell which new tools will help and which are hype, especially when what works for me may not necessarily work for you. Not only that, but some technologies just don't work well together and require a little extra coddling to co-exist.
In this article, I'll outline some fundamentals for identifying whether a technology is a good strategic fit for your company, and then I'll show you a couple of case studies to illustrate why certain technologies may not fit without some assistance.
The Latest Is Not Necessarily the Greatest
Just because something is new does not mean that it is improved; we've seen that proven many times over. To determine whether a technology fits your organization, you need to address several key questions:
- Is this technology stable?
- Is this technology compatible with the rest of my environment?
- Does this technology make me more dependent or less?
These questions don't address whether the technology helps you achieve your business goals or whether it's cost-effective for your particular environment; those are business issues, and this article is focused entirely on the technology.
Is This Technology Stable?
This is a tough question, and one that you may not always get correct, but you must at least attempt to make the right call. In my mind, calling a technology "stable" requires that it meet a lot of criteria, not all of which are obvious at first glance.
For example, is the technology going to continue to be supported? Back in the day, that meant determining whether the vendor would still be around in five years. These days, the concept is more elusive, especially when dealing with open-source technologies. In the short-attention-span world of public consensus, today's Cadillac is tomorrow's Edsel.
Technologies die out for different reasons; two typical ones are over-engineering and under-engineering, with Enterprise JavaBeans (EJBs) and Struts as respective examples. EJBs are already being recognized as a solution in search of a problem, especially in business applications. The overhead associated with EJBs and their lack of essential business capabilities make them unsuitable for all but the most strictly Java-centric environments. Struts, on the other hand, is an example of under-engineering: sort of an "alpha" release of a really neat idea, it is now all but orphaned even by its creator and is well on its way to being eclipsed by the newer JavaServer Faces (JSF) standard.
Other issues associated with stability include whether a given technology is an accepted standard, whether the community that maintains it has a good vision, and whether the technology actually performs acceptably under your production workload.
Is This Technology Compatible with the Rest of My Environment?
This is not just a question of whether a given technology will actually work. It's also a question of whether you need the technology, both short- and long-term. An example would be Wireless Application Protocol/Wireless Markup Language (WAP/WML). While there is nothing intrinsically wrong with the WAP/WML protocol, it's really not necessary for most people. Unless you need to support a specific wireless device that happens to conform to the WAP standard, WAP is simply not required. If the idea is to access your Web site from a cell phone, then standard HTML is probably fine, especially as the capabilities of cell phones continue to progress. I'd be willing to bet that there are companies that over-invested in WAP/WML because it looked like a good idea at the time, but in the long run it really wasn't needed--it wasn't compatible with their strategic direction.
Does This Technology Make Me More Dependent or Less?
This is the issue I get the most pushback on these days, and there's a valid reason for that pushback. My question boils down to this: Once you start using this technology, are you then tied to it? More importantly, are you "black box dependent" upon it? By "black box dependent," I mean are you no longer capable of actually debugging the code you are putting into production? I liken many of the newer tools, especially the Web-enabling middleware, to the electronic ignition in cars. Prior to the introduction of the computer, a backyard mechanic could usually diagnose a car's problem and fix it without too much trouble. As cars became more computerized, with more sensors and electronics, it became all but impossible for someone with a screwdriver and a timing light to figure out what was wrong. Instead, only the computer knows what's going on and why the engine is behaving poorly. Today, the first thing that happens at the repair shop is that the mechanic hooks the car up to an on-board diagnostics (OBD) reader, which basically just displays the diagnostic codes that identify whatever the computer thinks the problem is. Your options are basically limited to replacing whatever the computer tells you to replace.
We're fast approaching just that level of automation with some of the products and technologies being handed to us. Some, like Struts, simply provide another layer on top of something we could already do ourselves. While an easier syntax is not necessarily a bad thing, if you are locked into that specific syntax and it doesn't provide what you need somewhere down the road, you've painted yourself into a corner. This is especially true for those orphaned technologies that are no longer actively supported.
Another area of programming automation comes from tools like WDSc (WebSphere Development Studio Client). While I think I've been pretty clear about how much I love WDSc, it's also crucial to recognize that some of its features can be misused. The wizards in particular are prone to abuse. It's one thing to use a wizard to generate some code that I can look at, learn from, and adapt for use in my environment. It's another thing entirely to depend on the wizard for production-quality code. And I fear that more and more we're seeing IT departments do just that: use a wizard to generate a prototype and then shove that prototype directly into production. And that raises some serious red flags in my opinion. An example might help show why.
Case Study: The Web Service Wizard--Good or Evil?
Okay, that's a bit of an overstatement of the issue. The wizard itself is neither good nor evil, but it can certainly be misused. I had a real-life situation in which I was guilty of misusing the wizard, so I can attest to the potential danger of the approach.
Black Box or Black Hole?
In this case, I wanted to expose a service to one of my clients. This service allowed them to send me an XML file, which I would then process to generate and send back various objects, including some nicely formatted JSP files. From an initial review, this looked like an absolutely perfect use of a Web Service. The client would simply attach to a Web Service on my machine and transmit the XML file, and I would then return the objects appropriately encapsulated in another XML file. Nothing to it!
At first, this seemed fine. Although I ran into a few issues during the initial prototyping, most of them could be attributed to my own inexperience in developing Web Services. The WDSc wizards worked pretty much flawlessly, allowing me to generate a "HelloWorld" Web Service, which I was subsequently able to expand into a more full-featured utility. After some initial testing, I was ready to go to market, so to speak.
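To give you a feel for what that prototype looked like, here's a minimal sketch of the kind of plain Java class the wizard wraps. The class and method names here are my own invention, not the actual code from the project; the point is simply that you write an ordinary class, and the wizard generates the WSDL and all of the SOAP plumbing around it.

// Hypothetical sketch only; class and method names are invented for illustration.
// A plain Java class like this is what a wizard such as WDSc's wraps with
// generated SOAP plumbing. The class itself never sees any of that code.
public class DocumentExchangeService {

    // The "HelloWorld" starting point: the wizard exposes this as an operation.
    public String hello(String name) {
        return "Hello, " + name;
    }

    // The expanded version: accept the client's XML payload and return the
    // generated objects wrapped in another XML document. The real processing
    // is omitted; this only shows the shape of the interface.
    public String processDocument(String requestXml) {
        // ... parse requestXml, generate the JSPs and other objects ...
        StringBuffer response = new StringBuffer();
        response.append("<response>");
        response.append("<status>OK</status>");
        // ... append the generated artifacts, suitably encoded ...
        response.append("</response>");
        return response.toString();
    }
}

That simplicity is exactly the appeal: you write ordinary Java, and everything that turns it into a Web Service is generated for you--which, as you'll see, is also exactly the problem.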
Unfortunately, that launch never happened. When you package a Web Service for deployment, you simply send some descriptive information (typically a WSDL document) to your consumer, who then uses it to build a Web Services client. All of this happens under the covers, and very little is really exposed to the programmer. And while this can be seen as a good thing, it has one drawback: It's a black box. And when something goes wrong in a black box, there's very little anyone can do to fix it. In this case, a problem with firewalls arose. But since the person in charge of the firewalls didn't understand Web Services, and since I had only the black box capabilities of the generated code, there was precious little that could be done to resolve the problem.
I did begin the research required to identify the problem, and I'll tell you about that in next month's "Weaving WebSphere" column. As a little preview, I'll let you in on an absolutely fantastic tool I found for analyzing TCP/IP traffic. The tool is Ethereal, and it's a sterling example of an open-source project: lots of people working together to create a great, free product. It's almost too good to be true, actually; I'd love to know how the project is funded and how the original author, Gerald Combs, supports himself. Regardless, it's the best TCP/IP analysis tool I've seen out there, including commercial products, and I highly recommend it.
Anyway, I ended up having to go back to the tried-and-true method of using a servlet. My client now sends me the XML via a servlet, and I send back all the generated objects in a zip file. That particular logic was quite cool, actually; it involved making use of the java.util.zip classes, as well as one or two other open-source packages. All in all, the project was successful, but the Web Service portion of it was a dismal failure. Why the difference? Well, primarily because I could play with the open-source packages and see how they worked, while the Web Service stuff was completely hidden beneath the code generated by the wizard. It is this level of black boxing that I believe to be generally detrimental to our industry.
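For the curious, here's a rough sketch of the servlet approach. This is not the actual production code--the class name and details are mine--but it shows the basic shape: read the XML document from the POST body, then stream the generated objects back as a zip file built with java.util.zip.

// Hypothetical sketch of the servlet-plus-zip approach; names and details
// are invented for illustration, not taken from the actual project.
import java.io.*;
import java.util.zip.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class GeneratorServlet extends HttpServlet {

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        // Read the incoming XML document from the request body.
        BufferedReader reader = request.getReader();
        StringBuffer xml = new StringBuffer();
        String line;
        while ((line = reader.readLine()) != null) {
            xml.append(line).append('\n');
        }

        // ... process the XML and generate the output objects here ...
        // For illustration, pretend we produced a single JSP source file.
        String entryName = "generated/example.jsp";
        byte[] entryBytes = "<%-- generated content --%>".getBytes();

        // Stream the results back to the client as a zip file.
        response.setContentType("application/zip");
        response.setHeader("Content-Disposition", "attachment; filename=\"results.zip\"");

        ZipOutputStream zipOut = new ZipOutputStream(response.getOutputStream());
        zipOut.putNextEntry(new ZipEntry(entryName));
        zipOut.write(entryBytes);
        zipOut.closeEntry();
        // ... add further entries for the other generated objects ...
        zipOut.finish();
        zipOut.close();
    }
}

The difference is that every byte that crosses the wire here is code I can see and step through, which is precisely what the wizard-generated Web Service didn't give me.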
Same Story, Different Day
Think about it: If all you know as a programmer is how to attach a bunch of black boxes together, and you're fundamentally incapable of either diagnosing what's wrong with those boxes or adapting them to changing business conditions, then you really aren't providing any value add to your department. If software is written by wizards, and those wizards are as readily available in Beijing as in Boston, then there's no reason not to ship that work offshore.
I don't mind wizards as ways to learn how a given task is done. But the code generated by a wizard is almost always far inferior to anything that can be handcrafted by a good programmer. It's that craftsmanship that differentiates an IT professional from a code jockey. And yet, these days I often hear people (sometimes people I respect in the industry!) say that they need more WYSIWYG tools and code generators to do the work for them. But as soon as companies start accepting the output of generators, there's no need for programmers, no way for ground-breaking software ideas to be business differentiators, and eventually no impetus for innovation.
And once that happens, the industry flies offshore faster than you can blink.
Case Study: RPG Source Code in the IFS
This one has me scratching my head. While I know that Java source, HTML source, and all the other "cross-platform" types of source live best in a stream file environment such as the IFS, I still don't see that as a good reason to move my RPG, CL, and DDS source there as well (and don't get me started on using SQL rather than DDS to define files, please, or we'll be here all day).
This is simply another case in which the technology change doesn't add any value. Look at what you lose: line-by-line source change dates, PDM-based access, the source change date on objects.... I can't see what the benefits are here, but maybe you can; if you disagree, please share your views in the forum. I'd really like to know.
In the meantime, though, I do know that there are some tools out there that allow you to do some cool things with source files stored in the IFS. The most powerful tools for text file manipulation come from UNIX, and many of them are available under the QShell environment. I believe you can also access the IFS from a Linux PC, although it may require the Samba file mapping utility. Personally, though, my favorite approach is to map a drive and then use the UNIX emulation package Cygwin to access the IFS. Cygwin is a hefty but very powerful set of UNIX utilities ported to run under Windows; be warned that a full installation can run to hundreds of megabytes.
The Final Word
Technology is good, change is good, productivity aids are good. But as in all things, moderation is the key, and forethought is required before adopting new technologies. Take the time to assess whether a given technology is actually going to help your company in the long run. Remember, if you let your tools dictate your design, your design is unlikely to provide you an advantage.
Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been working in the field since the late 1970s and has made a career of extending the IBM midrange, starting back in the days of the IBM System/3. Joe has used WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. Joe is also the author of E-Deployment: The Fastest Path to the Web and Eclipse: Step by Step. You can reach him at