How is an organization or an ISV supposed to keep up in these challenging times?
The IBM Power Systems marketplace is in decline.
Some in the IBM i user community petition and plead with IBM to "market the IBM i server" so that it is acceptable within their organizations. However, the level of IBM marketing is not the real problem.
Most IBM i users got involved with the platform simply because their organizations purchased an ISV solution that executed on it (e.g., BPCS, MAPICS). No matter how much they love the IBM i hardware and OS now, the marketplace was built by ISV solutions, not by the wonderful iSeries hardware/OS combination.
At the marketplace's zenith, more than 20,000 ISV solutions executed on iSeries hardware. These ISV solutions, not the hardware or OS, are what really made the AS/400 marketplace.
The real problem now is that the IBM i ISV solution providers are struggling to provide the types of solutions that customers want to use today. ISVs can make this transition but are continually being led astray, squandering their scant resources on ill-conceived "modernization" projects.
Then and Now
Then: The ISV solutions made the IBM i marketplace thrive in the 1980s and 1990s. Their solutions were the main driver of demand for the IBM i hardware/OS combination.
Now: ISV activity, or rather the lack of it, impacts the IBM i marketplace in a negative way. A significant reason for this is the prohibitive cost of, and limited access to, the skills required to migrate (or change, modernize, rewrite — call it what you will) from what ISVs had then to what they need now.
A brief and eclectic review of the sorts of skills that an ISV used in the 1980s and '90s, versus those required today, may make the preceding point clearer:
Requirements, Solution Prototyping, Expectation, and Project Management
Then: Using a variety of quite crude tools, even homemade ones, some organizations performed all of these critical project startup tasks well. Some did not. Those that did were usually more successful.
Now: Despite having a variety of very sophisticated tools at their disposal, many organizations still fail to grasp that upfront investments made in requirements gathering, solution prototyping, expectation management and project management pay big dividends down the line.
To me, this largely explains why the failure rate for IT projects remains unchanged. IT people like to blame technology for failure, but these are the areas in which the roots of a failure are usually found.
Architecture, Abstraction, Modularity
Then: People who "got it" created modular and abstract application interfaces. They were guided by crude tools, their own experiences, and the relative simplicity of their needs. Those who didn't get it created unmaintainable and inflexible monolithic applications and interfaces.
Now: Despite better tools and design principles like SOA, lots of people still don't get it and continue to create these unmaintainable and inflexible monolithic applications and interfaces, even when the "code" is class-based, is highly modular, and uses Web service interfaces.
The misunderstanding persists: good architecture, abstraction, and modularity are just design principles. These principles aren't something that involvement with an acronym (like SOA) or a technology (like .NET) can teach, impose, or enforce in an organization.
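The point that modularity is a design discipline rather than a technology artifact can be illustrated with a minimal sketch. The names here (`TaxCalculator`, `FlatRateCalculator`, `invoice_total`) are hypothetical, invented for illustration: the business logic depends only on an abstract interface, so the concrete implementation can be swapped (a web service, a stored procedure, an RPG program call) without the caller changing — and nothing about this requires SOA, .NET, or any particular acronym.

```python
from abc import ABC, abstractmethod

# Hypothetical abstraction: business logic depends only on this
# interface, never on a concrete technology.
class TaxCalculator(ABC):
    @abstractmethod
    def tax_for(self, amount: float) -> float: ...

# One concrete implementation; it could just as well wrap a web
# service or a legacy program call, invisibly to the caller.
class FlatRateCalculator(TaxCalculator):
    def __init__(self, rate: float):
        self.rate = rate

    def tax_for(self, amount: float) -> float:
        return round(amount * self.rate, 2)

def invoice_total(amount: float, calc: TaxCalculator) -> float:
    """Business logic written against the abstraction only."""
    return round(amount + calc.tax_for(amount), 2)

print(invoice_total(100.0, FlatRateCalculator(0.08)))  # → 108.0
```

Equally, a class-based design that lets callers reach into implementation details would be just as monolithic as any green-screen spaghetti, which is the article's point: the principle, not the technology, does the work.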
User Interfaces
Then: It was 5250. IBM's Common User Access (CUA) guidelines provided some degree of industry-wide standardization to minimize user-training costs.
Now: The user interfaces are now primarily Web browser and Windows rich-client-based, with some mobile and PDA-type interfaces thrown in.
Windows rich-client interfaces are continually pressured by the Microsoft Office look and feel, which ceaselessly moves the user expectation bar around. Users above and below the 30-to-40 age range have markedly different expectations in this area, presenting UI design teams with special challenges (e.g., a 50-year-old user may find a high-function Windows interface hard to learn and use, while a 25-year-old user may complain loudly about the lack of full drag-and-drop functionality).
In Web and mobile applications, pretty much anything goes regarding design standards. Web browser interfaces often present special challenges to IT developers, who are prone to overestimate their graphic design skills. But acquiring skills in this area is costly and requires significant amounts of time and experience.
ISVs continue to fall foul of the fundamental Web browser versus Windows rich-client choice, confusing the economics of zero deployment with usability and often failing to match the performance, usability, and integration expectations of their customers.
Platforms
Then: It runs on an AS/400 (Silverlake). What's your next question?
Now: An ISV has to ensure that the applications and solutions run on the platforms that users are working with. Most IBM i ISVs love the system, but the reality is that not every user they encounter will understand and appreciate why. The multi-platform requirement, more than anything else, is what has made the following areas so much more costly and complex to deal with.
Coding and Implementation
Then: With a little RPG, DDS, CL, and a bit of DBMS relational design, you could build an application.
Now: In various flavors and combinations, people use RPG, DDS, SQL, CL, .NET, C#, C, C++, VB.NET, Java, HTTP, HTML/DHTML, JavaScript, JScript.NET, AJAX, ANT, ANTS, JSON, XML, SOAP, Ruby, Grails, PHP, etc., etc., etc. There seems to be a new and "hot" acronym every few months. Which of these acronyms you need to get involved in depends upon your design choices, but all of them are complex and error-prone. They have very high maintenance, enhancement, and testing costs.
Organizations sometimes choose a coding language (say C#) because they will be able to access a pool of coding resources. To me, this is badly missing the point. I would much rather retrain someone who knows my business than go out and shake a resources tree, hoping that something good will fall out. Re-skilling a good known resource is always a better risk than employing a brand new resource.
Performance
Then: It was just about DBMS access, CPU cycles, and 5250 transmission loads over twinaxial cables and dialup or leased SNA lines.
Now: DBMS access and CPU cycles still count on the backend server, but now you need to add in a myriad of other factors that vary enormously from customer to customer. For example:
• DBMS performance variations (e.g., DB2, SQL Server, and Oracle)
• OS performance differences (e.g., XP and Vista)
• Server and server tier sizes, loading, and performance
• LAN and WAN bandwidth
• Firewall, proxy, wireless, and VPN bottlenecks
• ISP performance and bandwidth
• Data-transfer volumes
• CPU power, memory, and disk space available on client PCs
• The use of desktop browser plug-ins, virus checkers, other desktop applications, etc.
Of course, all the above mean that this area is buck-passing heaven.
Testing
Then: Repeatable, manually performed 5250 scripts ruled the day, maybe with a regression/replay tool thrown in if you could afford it.
Now: Regression/repeat tools are commonly used in an attempt to deal with ever-increasing complexity. Applications with Web browser interfaces need to be tested with a combination of browsers and browser versions. Applications with Windows rich-client interfaces need to be tested across a variety of Windows OS versions (e.g., Server 2003, Server 2008, XP, and Vista) with a myriad of different configurations and sometimes with different databases and database versions (e.g., SQL Server 2005, SQL Server 2008, and Oracle 8.0). The skills required to design and implement effective testing regimes are rare, and the cost of testing has skyrocketed since the late 1990s.
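The combinatorial nature of this testing burden can be sketched briefly. This is a hypothetical illustration, not a real test suite: `login_page_renders` is a stand-in for an end-to-end check against one environment, and the point is simply that every browser/database pairing multiplies the work, which is exactly why the skills and cost have ballooned.

```python
import itertools
import unittest

# Hypothetical configuration matrix; in practice each entry would
# drive a real browser/OS/database test environment.
BROWSERS = ["IE7", "Firefox 3"]
DATABASES = ["SQL Server 2008", "Oracle 8.0"]

def login_page_renders(browser: str, database: str) -> bool:
    # Placeholder for a real end-to-end check against one configuration.
    return True

class ConfigurationMatrixTest(unittest.TestCase):
    def test_all_configurations(self):
        # Every pairing must pass; two browsers x two databases is
        # already four environments, and real matrices are far larger.
        for browser, database in itertools.product(BROWSERS, DATABASES):
            with self.subTest(browser=browser, database=database):
                self.assertTrue(login_page_renders(browser, database))
```

Add a handful of OS versions and service packs to the matrix and the count multiplies again, which is the "Then vs. Now" gap in miniature: one predictable AS/400 configuration versus dozens of environments per release.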
Deployment, Configuration, Support
Then: You had to learn two IBM i commands: Save Library (SAVLIB) and Restore Library (RSTLIB). When a user called for support, you asked for the job log. It usually pointed you straight to the problem area. What was there to configure? All the AS/400s were more or less predictably the same.
Now: The number of technologies and configurations mentioned in the preceding topic areas makes it clear that really effective deployment, configuration, and support skills are rare and expensive. Few application failures now produce anything as simple, concise, and easy-to-find as a job log. Sometimes, issues take weeks just to reproduce. It is not uncommon for an issue to happen only on a particular device (usually the CEO's or CFO's laptop).
The root problem here is that every user has a different configuration in areas like DBMS, security, server tiering, networking, and bandwidth. Customers quite reasonably expect that the ISV's application will work in their particular configuration. From the deployment, configuration, and support perspectives, this customer expectation is a cost and skills black hole.
Maintenance, Enhancement, and Future Directions
Then: After five years or so, the weight of compulsory maintenance, brought about by business-level change, usually consumed most of the available budget. This severely restricted the ISV's ability to enhance applications, keep up with competitors, and most importantly, match the expectations of new business.
Now: My impression is that this problem still exists and has been exacerbated by the rate at which technology and expectations change. All industries have their own "hot" topics and buzzwords. The special challenge for an ISV is to keep up with ever-changing buzz, both in IT and in their own business area (e.g., in Financial Services, a customer CEO wants STP, and his IT department wants SOA).
Unfortunately, ISVs are still busy skilling up in complex technologies and then embedding them directly into their applications. When the five-year problem inevitably catches them, it will be with a vengeance because of this failure to insulate their application from what is by then an arcane technology. This will drag them to an enhancement standstill.
Nobody Has a Brain Big Enough Anymore
Then: Most ISVs usually had someone who understood it all. She or he understood the business, the users and their expectations, and the technology and had some vision of where their world would be in two or three years. Because there was a singular vision and the world was simpler, it was not hard to keep everybody in the organization focused on the objective: the delivery of business value.
Now: In every area we've discussed, the IT world is at least an order of magnitude more complex than it was 10 years ago. This is the nub of the IBM i-ISV dilemma. It's no longer possible for a single person to understand it all. Nobody has a brain that big, nor does he or she have the amount of time required to accumulate and maintain the business and the IT knowledge necessary to keep up. If one person can't do it, then you have to have multiple people and teams, with the resulting cost increases, management problems, and differing visions.
Another significant issue for an ISV is the distraction level imposed on the organization by ever-changing technologies. Every technology involves a basic learning curve, even if only to be able to assess its business value. These ceaseless learning curves distract an organization from delivering real business value.
Undoubtedly, IT technologies improve and sometimes revolutionize industry segments, but unless approached and assessed at the right level, they may overwhelm and completely defocus the delivery of business value. For example, it's not uncommon for programmers to choose a technology because it looks good on their resume.
What Might Be Done?
One solution is to just keep increasing the number of people employed to design, implement, test, and maintain the application, with off-shoring being an option to reduce the cost of doing this. Most of the problems associated with this solution are obvious. Some are not so obvious. For example, off-shoring may create a whole new raft of challenges in the requirements, prototyping, expectation, and project management areas. These types of problems are particularly hard for small-to-medium-sized organizations, with their limited budgets, to address.
Another solution is to stop thinking at the "hand-cranking" and low-level technology levels and to look to tools and frameworks to assist in the prototyping, managing, designing, implementing, and testing of applications. These types of tools are about doing more and better with less.
An ISV with access to good industry knowledge or domain expertise that delivers the right solution at the right time can still make a fortune and build an empire.
Taking into account how complex the IT world is today, they (and probably you) should consider using application-generation tools and prebuilt frameworks to get started rapidly and correctly. Such tools also help to shoulder the maintenance, enhancement, and testing burden that will inevitably be incurred.
Tools also reduce the time to market, often providing a staged evolutionary path rather than a big-bang revolutionary approach to change. This is critical for an ISV that needs to generate a consistent revenue stream. This is also why so many big-bang hand-cranked application modernization projects fail: the revenue from the existing product range completely dries up before the modernized version ever gets to market.
Above all, these tools provide the essential breathing space that is needed to keep up with competitors and continue to win new business.
MC Press Online