
Overview of the Modernization Options

There are five primary approaches to legacy modernization:

  • Re-architecting to a new environment
  • SOA integration and enablement
  • Re-platforming through re-hosting and automated migration
  • Replacement with COTS solutions
  • Data Modernization

Other organizations may use different nomenclature for these types of modernization, but most options fit into one of these five categories. The options are not mutually exclusive: each can be carried out in concert with the others or as a standalone effort, and in a large modernization project, multiple approaches are often used for different parts of the larger initiative. The right mix of approaches is determined by the business needs driving the modernization, the organization's risk tolerance and time constraints, and the nature of the source environment and legacy applications. Where the applications no longer meet business needs and require significant changes, re-architecture might be the best way forward. On the other hand, for very large applications that mostly meet the business needs, SOA enablement or re-platforming might be lower-risk options.

You will notice that the first thing we talk about in this section — the Legacy Understanding phase — isn't listed as one of the modernization options. It is mentioned at this stage because it is a critical step that is done as a precursor to any option your organization chooses.

Legacy Understanding

Once we have identified our business drivers and taken the first steps in this process, we must understand what we have before we go ahead and modernize it. Legacy environments are very complex and quite often have little or no current documentation. This is why a phase of analysis and discovery is valuable for any modernization technique.

Application Portfolio Analysis (APA)

To make use of any modernization approach, the first step an organization must take is to carry out an APA of the current applications and their environment. This process has many names; you may hear terms such as Legacy Understanding, Application Re-learn, or Portfolio Understanding. All of these activities provide a clear view of the current state of the computing environment and equip the organization with the information it needs to identify the best areas for modernization. For example, this process can reveal process flows, data flows, how screens interact with transactions and programs, and program complexity and maintainability metrics, and it can even generate pseudocode to re-document candidate business rules. Additionally, the physical repositories created as a result of the analysis can be used in the next stages of modernization, be it SOA enablement, re-architecture, or re-platforming. Efforts are currently underway by the Object Management Group (OMG) to create a standard method to exchange this data between applications. The following screenshot shows the Legacy Portfolio Analysis:

[Figure: Application Portfolio Analysis (APA)]
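
To make the output of such an analysis concrete, here is a minimal sketch, in Java and with invented names, of the kind of inventory record an APA repository might hold for each program; the maintainability index shown uses one commonly cited formula, while real tools define their own schemas and metrics.

    import java.util.List;

    // Hypothetical APA inventory record; the fields are illustrative, not any tool's actual schema.
    public class ProgramInventoryEntry {
        String programName;            // for example, "MENSAT1"
        String language;               // for example, "COBOL"
        int linesOfCode;
        int cyclomaticComplexity;
        double halsteadVolume;
        List<String> screensUsed;      // BMS maps referenced by the program
        List<String> transactions;     // CICS transactions invoked
        List<String> dataStores;       // DB2 tables or VSAM files touched

        // One commonly cited maintainability index formula:
        // MI = 171 - 5.2*ln(Halstead volume) - 0.23*(cyclomatic complexity) - 16.2*ln(lines of code)
        double maintainabilityIndex() {
            return 171.0
                    - 5.2 * Math.log(halsteadVolume)
                    - 0.23 * cyclomaticComplexity
                    - 16.2 * Math.log(linesOfCode);
        }
    }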

APA Macroanalysis

The first form of APA is a very high-level, abstract view of the application environment. This level of analysis looks at the applications in the context of the overall IT organization, and systems information is collected at a very high level. The key here is to understand which applications exist, how they interact, and what the identified value of the desired function is. With this type of analysis, organizations can manage overall modernization strategies and identify key applications that are good candidates for SOA integration, re-architecture, or re-platforming versus replacement with Commercial Off-the-Shelf (COTS) applications. Data structures, program code, and technical characteristics are not analyzed here.

The following macro-level process flow diagram was automatically generated by the Relativity Technologies Modernization Workbench tool. Using this, the user can automatically get a view of the screen flows within a COBOL application. This is used to help identify candidate areas for modernization and areas of complexity, and to support knowledge transfer and legacy system documentation. The key thing about these types of reports is that they are dynamic and automatically generated.

[Figure: APA macroanalysis screen flow diagram]

The previous flow diagram illustrates some interesting points about the system that can be understood quickly by the analyst. Remember, this type of diagram is generated automatically, and can provide instant insight into the system with no prior knowledge. For example, we now have some basic information such as:

  • MENSAT1.MENMAP1 is the main driver and is most likely a menu program.
  • There are four called programs.
  • Two programs have database interfaces.

This is a simplistic view, but when hundreds of programs are presented visually, we can quickly identify clusters of complexity, define potential subsystems, and do much more, all from an automated tool with visual navigation and powerful cross-referencing capabilities. This type of tool can also help to re-document existing legacy assets.
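
As a rough illustration of how such an inventory supports macro-level analysis, the following sketch builds a small call graph and ranks programs by how many call relationships they participate in; the called program names are invented, and a real APA tool would derive the edges by parsing the source rather than listing them by hand.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal sketch: rank programs by how many call relationships they participate in.
    public class CallGraphSketch {
        public static void main(String[] args) {
            // A menu driver calling four programs, as in the diagram above; callee names are hypothetical.
            Map<String, List<String>> calls = Map.of(
                "MENSAT1", List.of("CUSTINQ", "ORDENTRY", "ORDUPDT", "RPTMENU"),
                "CUSTINQ", List.of(),
                "ORDENTRY", List.of(),
                "ORDUPDT", List.of(),
                "RPTMENU", List.of()
            );

            // Count how many edges touch each program (in-degree plus out-degree).
            Map<String, Integer> degree = new HashMap<>();
            calls.forEach((caller, callees) -> {
                degree.merge(caller, callees.size(), Integer::sum);
                callees.forEach(callee -> degree.merge(callee, 1, Integer::sum));
            });

            // Programs with the highest degree are candidate hubs or clusters of complexity.
            degree.entrySet().stream()
                  .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                  .forEach(e -> System.out.println(e.getKey() + " relationships=" + e.getValue()));
        }
    }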

APA Microanalysis

The second type of portfolio analysis is APA microanalysis, which examines applications at the program level. This level of analysis can be used to understand program logic and candidate business rules for enablement or business rule transformation. It also reveals things such as code complexity, data exchange schemas, and specific interactions within a screen flow. These are all critical when considering SOA integration, re-architecture, or a re-platforming project.
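
For example, once the surrounding technical code is stripped away, a business rule slice recovered at this level might reduce to something like the following sketch; the rule itself (an insurance premium surcharge) and every name in it are invented purely for illustration.

    import java.math.BigDecimal;

    // Hypothetical candidate business rule harvested from a COBOL paragraph. The original
    // paragraph would mix this logic with MOVEs, CICS calls, and error handling; the
    // microanalysis slice isolates just the decision shown here.
    public class PremiumSurchargeRule {
        static final BigDecimal HIGH_RISK_SURCHARGE = new BigDecimal("0.15");

        static BigDecimal apply(BigDecimal basePremium, int driverAge, int accidentCount) {
            // Candidate rule: drivers under 25 with two or more accidents pay a 15 percent surcharge.
            if (driverAge < 25 && accidentCount >= 2) {
                return basePremium.add(basePremium.multiply(HIGH_RISK_SURCHARGE));
            }
            return basePremium;
        }
    }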

The following are more models generated by the Relativity Technologies Modernization Workbench tool. The first shows a COBOL transaction: we are able to take a low-level view of a business rule slice from a COBOL program and understand how the process flows. The particulars of this flow map diagram are not important; the point is that the model can be generated automatically and is dynamic, reflecting the current state of the code.

[Figure: APA microanalysis flow map of a business rule slice]

The second model shows how a COBOL program interacts with a screen conversation. In this example, we are able to look at specific paragraphs within a particular program, identify specific CICS transactions, and understand which paragraphs (or subroutines) are interacting with the database. These models can be used to further refine our plan for a re-architected system, help us identify business rules, and help us populate a rules engine, as we will see in later chapters.

[Figure: APA microanalysis of a COBOL program's screen conversation]

This is another example of a COBOL program that interacts with screens (shown in gray) and paragraphs that execute CICS transactions (shown in white). With these color-coded boxes, we can quickly identify paragraphs, screens, databases, and CICS transactions.

Application Portfolio Management (APM)

APA is only part of an IT approach known as Application Portfolio Management (APM). While APA is critical for any modernization project, APM provides guideposts on how to combine the APA results, the business assessment of the applications' strategic value and future needs, and IT infrastructure directions to come up with a long-term application portfolio strategy and the related technology targets to support it. It is often said that you cannot modernize that which you do not know. With APM, you can effectively manage change within an organization, understand the impact of change, and manage its compliance.

APM is an ongoing process, whether it is part of a modernization project or of an organization's portfolio management and change control strategy. All applications are in a constant state of change, and during a modernization project things are even more in flux: legacy code is changed, new development is done (often in parallel), and data schemas are changed. When looking into APM tool offerings, consider products that can capture these kinds of changes and provide an active repository, rather than a static view. Ideally, these tools should adhere to emerging technical standards, like those being pioneered by the OMG.

Re-Architecting

Re-architecting is based on the concept that legacy applications contain invaluable business logic and data, and that these assets should be leveraged in the new system rather than thrown out to rebuild from scratch. Since the modern IT environment elevates much of this logic above the code, using declarative models supported by BPM tools, ESBs, business rules engines, and data integration and access solutions, some of the original technical code can be replaced by these middleware tools to achieve greater agility. The following screenshot shows an example of a system after re-architecture.

[Figure: a system after re-architecture]

The previous example shows what a system would look like, at a high level, after re-architecture. We see that this isn't a simple one-to-one transformation of one code base to another. It is also much more than remediation and refactoring of the legacy code into standard Java code. It is a system that fully leverages technologies suited to the required task, for example, leveraging identity management for security, business rules for core business logic, and BPEL for process flow.

Thus, re-architecting focuses on recovering and reassembling the business-relevant processes from a legacy application, while eliminating the technology-specific code. Here, we want to capture the value of the business process, which is independent of the legacy code base, and move it into a different paradigm. Re-architecting is typically used to handle modernizations that involve changes in architecture, such as the introduction of object orientation and process-driven services.

[Figure: Re-Architecting]

The advantage that re-architecting has over greenfield development is that it recognizes that there is information in the application code and surrounding artifacts (for example, DDLs, copybooks, and user training manuals) that is useful as a source for the re-architecting process, such as application process interactions, data models, and workflow. Re-architecting will usually go outside the source code of the legacy application to incorporate concepts like workflow and new functionality that were never part of the legacy application. However, it also recognizes that the legacy application contains key business rules and processes that need to be harvested and brought forward.

Some of the important considerations for maximizing re-use by extracting business rules from legacy applications as part of a re-architecture project include:

  • Eliminate dead code and environment-specific code, and resolve mutually exclusive logic.
  • Identify key input/output data (parameters, screen input, DB and file records, and so on).
  • Keep in mind that many rules live outside the code (for example, screen flow described in a training manual).
  • Populate a data dictionary specific to application/industry context.
  • Identify and tag rules based on transaction types and key data, policy parameters, key results (output data).
  • Isolate rules into a tracking repository.
  • Combine automation and human review to track relationships, eliminate redundancies, classify and consolidate rules, and add annotations.
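
A minimal sketch of what an entry in such a tracking repository could look like follows; the fields are assumptions chosen to reflect the tagging described above, not the schema of any particular rule-mining tool.

    import java.util.List;

    // Hypothetical entry in a business rule tracking repository; the fields are assumptions
    // that mirror the tagging steps above, not the schema of any particular tool.
    public record HarvestedRule(
            String ruleId,                  // for example, "CLAIM-0042"
            String description,             // analyst-written summary of the rule
            String sourceProgram,           // COBOL program the rule was sliced from
            String sourceParagraph,         // paragraph or section containing the logic
            List<String> transactionTypes,  // transactions under which the rule fires
            List<String> keyInputs,         // parameters, screen fields, or columns read
            List<String> keyOutputs,        // results or columns written
            String status                   // for example, "candidate", "confirmed", "redundant"
    ) {}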

A parallel method of extracting knowledge from legacy applications uses modeling techniques, often based on UML. This method attempts to mine UML artifacts from the application code and related materials, and then creates full-fledged models representing the complete application. Key considerations for mining models include:

  • Convenient code representation helps to quickly filter out technical details.
  • Allow user-selected artifacts to be quickly represented as UML entities.
  • Allow the user to add relationships and annotate the objects to assemble a more complete UML model.
  • Use external information if possible to refine use cases (screen flows) and activity diagrams — remember that some actors, flows, and so on may not appear in the code.
  • Export to an XML-based standard notation to facilitate refinement and forward re-engineering through UML-based tools.

Because modernization with this method leverages the years of investment in the legacy code base, it is much less costly and less risky than starting a new application from ground zero. However, since it does involve change, it still carries risk. As a result, a number of other modernization options have been developed that involve less risk. The next set of modernization options provides a different set of benefits with respect to a fully re-architected SOA environment. The important thing is that these other techniques allow an organization to break the process of reaching the optimal modernization target into a series of phases that lower the overall risk of modernization for the organization.

In the following figure, we can see that re-architecture takes a monolithic legacy system and applies technology and process to deliver a highly adaptable modern architecture.

[Figure: re-architecture of a monolithic legacy system into a modern architecture]

SOA Integration

SOA integration is the least invasive approach to legacy application modernization; it allows legacy components to be used as part of an SOA infrastructure very quickly and with little risk, and it is often the first step in the larger modernization process. In this method, the source code remains mostly unchanged (we will talk more about that later) and the application is wrapped using SOA components, thus creating services that can be exposed and registered to an SOA management facility on a new platform but are implemented via the existing legacy code. The exposed services can then be re-used and combined with the results of other, more invasive modernization techniques such as re-architecting. Using SOA integration, an organization can begin to make use of SOA concepts, including the orchestration of services into business processes, while leaving the legacy application intact.
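
As a sketch of code-tier enablement, the following JAX-WS wrapper exposes an unchanged legacy account-inquiry function as a web service; the service, method, and LegacyAccountAdapter names are hypothetical, with the adapter standing in for whichever legacy connector is actually used.

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // Minimal sketch of wrapping an existing legacy function as a service; the legacy
    // program itself is not modified. LegacyAccountAdapter is a placeholder for whatever
    // adapter or connector product actually reaches the legacy system.
    @WebService
    public class AccountInquiryService {

        private final LegacyAccountAdapter adapter = new LegacyAccountAdapter();

        @WebMethod
        public String getAccountBalance(String accountNumber) {
            // Delegate to the unchanged legacy transaction and return its result.
            return adapter.callAccountBalanceTransaction(accountNumber);
        }
    }

    // Placeholder adapter; a real implementation would invoke the mainframe transaction
    // through the chosen integration product.
    class LegacyAccountAdapter {
        String callAccountBalanceTransaction(String accountNumber) {
            return "0.00"; // stubbed response for the sketch
        }
    }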

Of course, the appropriate interfaces into the legacy application must exist and the code behind these interfaces must perform useful functions in a manner that can be packaged as services. SOA readiness assessment involves analysis of service granularity, exception handling, transaction integrity and reliability requirements, considerations of response time, message sizes, and scalability, issues of end-to-end messaging security, and requirements for services orchestration and SLA management. Following an assessment, any issues discovered need to be rectified before exposing components as services, and appropriate run-time and lifecycle governance policies created and implemented.

It is important to note that there are three tiers where integration can be done: Data, Screen, and Code. Any of these tiers, depending on the state and structure of the code, can be extended with this technique. As mentioned before, this is often the first step in modernization.

[Figure: SOA integration with the legacy platform]

In this example, we can see that the legacy systems remain on the legacy platform. Here, we isolate legacy functionality and expose it as a business service using legacy adapters.

The table below lists important considerations in SOA integration and enablement projects.

Platform Migration

This area encompasses a few different approaches. They all share a common theme of low-risk, predictable migration to an open systems platform with a high level of automation to manage the process. With platform migrations, the focus is on moving from one technology base to another as fast as possible and with as little change as possible. In Chapter 10, Introduction to Re-hosting Based Modernization using Oracle Tuxedo, we will focus on moving from mainframe platforms to open systems through a combination of re-hosting applications to a compatible environment that maintains the original application language (usually COBOL), and automated migration of applications to a different language when necessary. Each uses a high level of automation and a relatively low level of human interaction compared to other forms of modernization. The best re-platforming tools in the market are rules-based and can also support automated changes to business logic or data access code, when required to address specific business needs, through specifically configured rule sets.

Automated Migration

Automated migration is a technique in which software tools are used to translate one language or database technology to another. It is typically used to protect the investment in business logic and data in cases where the source environment is not readily available or supportable (for example, skills are rare) on the target platform. Such migrations are only considered automated if the tools handle at least 80 percent of the conversion. Automated migration is very fast and produces a one-to-one, functionally equivalent application. However, the quality of the target code is heavily dependent on what the source is.

There are two primary factors that determine how good the target application is. The first is the source paradigm. If you are coming from a procedural programming model such as COBOL, then the resulting Java will not be well-structured, object-oriented code. Many vendors will claim pure OO or 100 percent compliant Java, but in reality, programs in OO languages can still be written in a procedural fashion. When the source is a step-by-step COBOL application, that is what you will end up with after your migration to Java. This solution works quite well when the paradigm shift is not large. For example, going from PL/I to C/C++ is much more attainable with this strategy than converting COBOL to Java. This strategy is often used to migrate from 4GLs, such as Natural or CA Gen (formerly COOL:Gen), to COBOL or Java. Of the two target environments, migration to Java is more complex and typically requires additional manual re-factoring to produce proper OO POJO components or J2EE EJBs that can be easily maintained in the future.
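
To illustrate why, here is a hedged sketch of what tool-translated COBOL often looks like in Java: working-storage items become class fields and paragraphs become methods, with no object model introduced. The program, names, and logic are invented and do not come from any particular migration tool.

    import java.math.BigDecimal;

    // Illustrative (invented) example of tool-translated COBOL: structurally correct Java,
    // but still organized exactly like the original procedural program.
    public class Ordproc {
        // Fields mirror WORKING-STORAGE items rather than forming a domain object.
        BigDecimal wsOrderTotal = BigDecimal.ZERO;
        BigDecimal wsLineAmount = BigDecimal.ZERO;
        BigDecimal wsDiscount = BigDecimal.ZERO;
        int wsLineCount = 0;

        // 1000-PROCESS-ORDER becomes a method that calls the other "paragraphs" in sequence.
        void p1000ProcessOrder() {
            p2000AccumulateLines();
            p3000ApplyDiscount();
        }

        void p2000AccumulateLines() {
            wsOrderTotal = wsOrderTotal.add(wsLineAmount);
            wsLineCount = wsLineCount + 1;
        }

        void p3000ApplyDiscount() {
            if (wsLineCount > 10) {
                wsDiscount = wsOrderTotal.multiply(new BigDecimal("0.05"));
                wsOrderTotal = wsOrderTotal.subtract(wsDiscount);
            }
        }
    }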

The second factor to consider is the quality of the source. Some re-factoring can be done on the source language, or on the meta-language often generated in the transformation, but this usually only addresses things such as dead code or GOTO statements, not years of spaghetti code.

If your goal is to quickly move from one technology to another, with functional equivalence, then this is a great solution. If the goal is to make major changes to the architecture and take full advantage of the target language, then this type of method usually does not work.

Re-Hosting

Re-hosting involves moving an application to another hardware platform using a compatible software stack (for example, the COBOL containers and compatible OLTP functionality provided by Oracle Tuxedo) so as to leave the source application untouched. This is the most commonly used approach to migrate mainframe COBOL CICS applications to an open systems platform and has been used in hundreds of projects, some as large as 12,000 MIPS.

The fundamental strength of re-hosting is that the code base does not change and thus there are no changes to the core application. There are some adaptations involved for certain interfaces, batch jobs, and non-COBOL artifacts that are not inherently native to the target environment; these are usually handled through automated migration. The beauty of this solution is that the target environment, an open systems platform typically running UNIX or Linux, has a significantly lower TCO than the original mainframe environment, allowing customers to save 50 to 80 percent compared to their mainframe operations. The budget savings gained from this move can fund a longer-term, but more beneficial, re-architecture effort.

[Figure: Re-Hosting]

Re-Hosting Based Modernization

Evolving from the core re-hosting approach and leveraging flexible, rules-driven automated conversion tools, this approach goes beyond re-hosting to a functionally equivalent application. Instead of a pure shift of COBOL code to a target system without any changes to the original code, some of the automated tooling used by Oracle's migration partners to re-host applications and data also enables automated re-engineering and SOA integration during or following migration. For example, the Metaware Refine workbench has been used to:

  • Automatically migrate COBOL CICS applications to COBOL Tuxedo applications.
  • Convert PL/I applications running under IMS TM to C/C++ applications under Tuxedo.
  • Identify and remove code duplication and dead code, re-documenting flows and dependencies derived from actual code analysis.
  • Migrate VSAM data and COBOL copybooks describing the data schema to Oracle DB DDLs and automatically change related data access code in the application.
  • Migrate DB2 to Oracle DB, making appropriate adjustments for data type differences, changing exception handling based on differences in return codes, and converting stored procedures from DB2 to Oracle.
  • Perform data cleansing, field extensions, column merges and other data schema changes automatically synchronized across data and data access code.
  • Migrate non-relational data to Oracle DB to provide broader access from applications on distributed systems.
  • Convert 3270/BMS interfaces to a Web UI using JSP/HTML, enabling modifications and flow optimization of the original legacy UI.
  • Adapt batch processing to a transactional environment to shorten batch windows.

APA tools for automated business rule discovery can also be used to help identify well-defined business services, and Oracle Tuxedo's SOA framework can then expose these COBOL services as first-class citizens of an enterprise SOA. This approach can also be applied to PL/I applications automatically migrated to C/C++ and hosted in Tuxedo containers. The bulk of the re-hosted code remains unchanged, but certain key service elements that represent valuable re-use opportunities are exposed as Web Services or ESB business services. This approach protects the investment in the business logic of the legacy applications by enabling COBOL components to be extended to SOA using the native Web Services gateway, ESB integration, MQ integration, and so on provided by Oracle Tuxedo, a modern TP/application server platform for COBOL, C, and C++.

Thus, we gain a huge advantage by having a well-structured, SOA-enabled architecture on a new platform that was delivered with a high degree of automation. Using a proven application platform with built-in SOA capabilities, including native Web Services support, ESB transport, transparent J2EE integration, and integration with a meta-data repository for full services lifecycle governance, makes this a low-risk approach. It also helps to address some of the key considerations in the SOA integration table above. With this approach, we can extend and integrate the legacy environment more easily than with a pure re-host, while benefiting from the automation that ensures high speed of delivery and low risk comparable to a black-box re-hosting.

The other aspect of this process is identifying components that will benefit from re-architecture, usually code with a low maintainability index or code requiring significant changes to meet new business needs, and using re-architecture techniques to re-cast them as new components, such as business processes, declarative rules in a rules engine, or re-coded J2EE components. The key is to ensure that the re-architected components remain transparently integrated with the bulk of the re-hosted code, so that the COBOL or C/C++ code outside of the selected components doesn't have to be changed. With Oracle Tuxedo, this is done via transparent bi-directional support for Web Services (using Oracle SALT) and J2EE integration (using the WebLogic Tuxedo Connector). The key guidelines listed earlier for business rule extraction and model mining apply to the components selected for re-architecture.
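
Because Oracle SALT exposes Tuxedo services as standard Web Services, a re-architected Java component can call back into the re-hosted COBOL with an ordinary JAX-WS client, as in the following sketch; the WSDL location, namespace, service name, and CustomerLookupPort interface are all assumptions, since in practice the port interface would be generated from the SALT-published WSDL.

    import java.net.URL;
    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.Service;

    // Hedged sketch: a re-architected Java component calling a re-hosted COBOL service
    // that SALT has exposed as a Web Service. The WSDL URL, namespace, and names below
    // are hypothetical; the port interface would normally be generated by wsimport.
    public class CustomerLookupClient {
        public static void main(String[] args) throws Exception {
            URL wsdl = new URL("http://tuxedo-host:8000/CustomerLookup?WSDL");
            QName serviceName = new QName("urn:example:legacy", "CustomerLookupService");

            Service service = Service.create(wsdl, serviceName);
            CustomerLookupPort port = service.getPort(CustomerLookupPort.class);

            // The COBOL service behind this call is unchanged; only the transport is new.
            System.out.println(port.lookupCustomer("0000123456"));
        }
    }

    // Stand-in for the wsimport-generated service endpoint interface.
    @WebService(targetNamespace = "urn:example:legacy")
    interface CustomerLookupPort {
        String lookupCustomer(String customerId);
    }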

Re-hosting based modernization is sometimes referred to as Re-host++. This term highlights its roots in re-hosting applications to a compatible technology stack, together with the broad range of re-engineering, SOA integration, and re-architecting options it enables. This unique methodology is supported by a combination of an extensible COBOL, C, and C++ application platform (Oracle Tuxedo) and flexible, rules-driven automated conversion tools from Oracle's modernization partners.

[Figure: Re-Hosting Based Modernization]

Data Modernization

Here we look at strategies for modernizing a set of data stores spread across disparate and heterogeneous sources. We often have problems with accessing and managing legacy data. Batch jobs are increasingly expensive to run and generate reports 24 to 48 hours after they are needed. Further, this legacy data often needs to be integrated with other database systems located on different platforms. So, from a business perspective, there is a real problem in getting actionable data in a reasonable amount of time, and at a low cost.

With Data Modernization solutions, we can look at leaving legacy data on the mainframe, pulling it out in near real time, lowering MIPS costs by processing reports outside of the batch window, and integrating this data with heterogeneous data sources. This is achieved by employing several technologies in concert.

Legacy Adapters

With a collection of legacy adapters provided by Oracle and our partners, an organization can access almost any data store from any environment. Further, many of these technologies can employ bidirectional change data capture, so that data changes can be published to a data warehouse in near real time.
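
As a sketch of what this non-invasive access can look like from the distributed side, the following JDBC snippet reads mainframe-resident data through a relational view that a legacy data adapter might provide; the driver URL, credentials, and table names are placeholders, since each adapter product defines its own.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Hedged sketch: querying legacy (for example, VSAM- or IMS-resident) data through an
    // adapter that presents it as relational tables. The URL, credentials, and table names
    // are placeholders; each adapter product defines its own connection details.
    public class LegacyDataRead {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:legacyadapter://mainframe-host:9999/PROD"; // adapter-specific URL

            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT ORDER_ID, ORDER_TOTAL FROM ORDER_MASTER")) {
                while (rs.next()) {
                    System.out.println(rs.getString("ORDER_ID") + " " + rs.getBigDecimal("ORDER_TOTAL"));
                }
            }
        }
    }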

The following is the current list of legacy adapter and change data capture partners that are part of the Oracle Modernization Alliance, along with their respective URLs:

With legacy data adapters, we can access relational and non-relational data stores. Once we have access to this information, we need to rationalize it into a common data set. This is where we can employ Oracle's ETL tools.

Using Oracle Data Integrator (ODI) or Oracle Warehouse Builder, we can then connect to these data stores to do the data mapping and transformation. With ODI, we can integrate with high-volume and event-driven processes. Once the data has been transformed and loaded into the target database, we then employ Oracle Business Intelligence.

With Business Activity Monitoring, we can extend the notion of business intelligence and gain a real-time view into the business processes of the organization. Managers can move from disparate data and expensive, stale reports to visibility into the discrete business processes that drive the organization. This enables organizations to leverage their legacy assets not only to deliver key business intelligence, but also to correlate these processes with key performance indicators (KPIs).

Finally, another important feature of this solution is that it is non-invasive. Many mainframe shops do not undertake large-scale modernization efforts because of the perceived high risk involved. With data modernization, an organization can retain the entire legacy infrastructure without changing it, and instead employ SOA technologies to extend the mainframe while simultaneously lowering MIPS costs.

From a visual perspective, we can go from this type of slow, expensive, static reporting:

[Figure: static legacy report]

To this, leveraging Business Activity Monitoring and Business Intelligence in a fully integrated manner:

[Figure: integrated Business Activity Monitoring/Business Intelligence view]

Replacement

Commercial Off-the-Shelf (COTS) products are frequently considered when a modernization project is undertaken. If a suitable target package exists, COTS replacement can be a highly cost-effective strategy with a significant reduction in risk. This works very well for billing, HR, payroll, and other commonly used applications.

The implementation of a COTS solution assumes that the organization is willing to adapt to the new system paradigm; therefore, core business processes must be altered and adapted. Another aspect of a COTS replacement built on an SOA framework is that the new architecture can be utilized to orchestrate other components. Oracle Applications are built upon Fusion Middleware, which can enable an entire organization to integrate heterogeneous applications and data faster.

[Figure: Replacement]