
problem growing exponentially

    More articles such as this appear to be surfacing every day as the problem of managing large numbers of complex transactions grows exponentially.

    http://www.finextra.com/Finextra-downloads//featuredocs/eEMSWhitepaper.pdf

    Market Dynamics Drive Operational Risk
    A host of factors is making operational risk more challenging and more important to control:
    - Trade and transaction volumes are exploding with the advent of new channels, new markets and new products.
    - Data feeds are mushrooming as consumers adopt more channels to access, manage and transmit their money.
    - Rapid innovation has produced increasingly complex financial products and trading activities.
    - Globalization of markets has increased the number of settlement and clearing processes in local markets, introduced time zone issues and forced companies to synthesize data flowing through geographically dispersed operation centers.
    - Extended trading hours have compressed the processing window in which reconciliations and exception management must take place.
    - Sarbanes-Oxley, Basel II and other regulations have increased demand for customized, ad hoc reporting and created the imperative for more dynamic management reporting on risk and compliance within the enterprise.
    These inexorable trends are placing extreme strain on the technological infrastructures of most financial
    services organizations. The fact is, most technologies in place today were created to manage a world that simply
    does not exist anymore. The next-generation operational risk control solution must be capable of dynamically
    adapting to new situations, integrating data from strikingly disparate “silos” and feeds, and creating an
    enterprise-wide view of the exceptions and risks the organization is encountering. Jumping these hurdles
    without tripping up entails first understanding the playing field.
    Technology Is the Problem. Technology Is the Answer.
    Technological barriers to a comprehensive exception management solution reside both inside and outside the
    financial institution:
    Legacy applications have typically been built to handle very specific problems; large financial institutions
    tend to have a different system for every application. Rarely are they flexible or extensible enough to adapt to
    new products and new environmental dynamics such as those discussed above. The result is a cost structure
    driven by the need to hire outside programmers to rewrite databases or applications whenever the environment
    changes. These legacy applications tend to be expensive to maintain, limit themselves to batch processing in a
    real-time world, and offer little consistency among platforms and technologies. Further, legacy systems may not
    be capable of delivering data in a usable format to other systems that require it. What served the organization
    reasonably well in its day has simply been outgrown by current demands.
    Inconsistency between applications creates further complexity by generating “process silos” within the
    organization. Because each process must reside on a particular application, there is a tremendous amount of
    duplicated effort and essentially no way to identify risk and compliance issues that cut across process silos. Even
    more challenging, the number of acquisitions over the past two decades has resulted in substantial duplication;
    ProfitStars has one client whose acquisitions left it operating seven separate teller systems within its
    institution. It has to reconcile and integrate data across seven platforms that cannot even talk to one
    another. The result is a hodge-podge of handcrafted reports and documents that must be cross-referenced in an
    attempt to manually solve the problem. One can imagine the opportunity for human error to creep into such a
    system. Even in less extreme circumstances, silos of information and processes make it virtually impossible for
    management to get an enterprise-wide look at what’s happening in its business.
    Excessive dependence on IT resources to maintain legacy applications drives further inefficiencies into
    the organization. Legacy applications are not designed to be configured or maintained by line-of-business
    administrators and therefore the line of business cannot respond dynamically to evolving needs. IT
    departments — always stretched thin — end up creating bottlenecks for the business. We have seen new
    product initiatives held up for months — even a year or more — as IT tries to catch up. In another instance,
    a trust organization went to market with its new product, but had to bring in extra labor resources just to flag
    and process exceptions its software should have been handling.
    Manual processing of exceptions remains a typical business process in many financial institutions: even
    though the automated system flags the exception, it must still be processed by hand. In fact, this is the biggest
    issue our clients seem to deal with. Manual processes surrounding the automation technology simply create
    more opportunities for errors and unquestionably slow the processing speed. One company we consulted was
    producing 11,000 pages of exceptions per day — virtually all of them timing discrepancies. The lack of an
    automated solution meant it had to comb through these lists manually, at great risk of overlooking the real,
    dangerous exceptions hidden within. Finally, manual systems virtually never provide a complete audit trail, a
    fundamental necessity for compliance and safe business practice.
    Connectivity within and between firms increases the potential for exceptions (and therefore compliance issues
    and transactional risks) due to the difficulty of communicating seamlessly between firms or divisions. In our
    globalized market, this can be a massive issue. One client in the securities industry receives 40 unique data
    feeds from four different data centers throughout the day, representing hundreds of thousands to millions of
    transactions. Without an automated solution to move data among systems, the variety of industry “standards”
    and data formats means the operational risk manager is unlikely to keep up.
    Managing Data for Operational Risk Control
    Managing data as an enterprise asset is the key to achieving mastery in operational risk control. Any
    successful solution must deliver the ability to get data from any source, manipulate that data into a form that is
    best suited for analysis and processing, and deliver that data to any destination in a prescribed format.
    This sounds simple enough, but when you consider all of the aforementioned complexities and the profusion of
    technology hardware and software platforms on which that data may reside, the challenges become evident.
    Data and exception workflow solutions are of great value in that they may be deployed across a firm’s
    technology infrastructure in a variety of ways, based upon the needs of the organization. Certainly, such
    solutions may be used to replace existing platforms, particularly reconciliation and exception management
    platforms and ETL (Extract/Transform/Load) or data reformatting tools. These solutions may also be
    deployed in a complementary fashion, used to provide risk management functionality, exception workflow
    functionality, data management capabilities and compliance capabilities around core matching engines that
    do not necessarily provide these elements. For example, legacy reconciliation programs may be incapable of
    handling “new” reconciliations of esoteric products. A financial institution may retain its legacy application
    for handling the basics, and add a next-generation matching/reconciliation application to manage newer
    products. In doing so, it may find that it eventually wants to migrate all reconciliations to the next-generation
    system, which will be perfectly capable of handling them.
    Eight Attributes of Next-Generation Operational Risk Control Solutions
    The future-looking data system must deliver a number of outcomes for the business. Without these
    fundamentals, the business risks becoming “painted into a corner” by its technology — in much the same way
    we described the legacy systems doing earlier.
    A data solution does not reside in the middleware space, as it is not intended to act as the bus that moves data
    from one point to another. Nor is such a solution merely an ETL tool, though it will certainly deliver all of the
    functionality of an ETL. The data solution must move beyond these functions to provide the ability to apply
    business and comparison rules that facilitate exception workflow management, delivering at minimum the
    following functionalities:

    1) Data Capture and Extraction
    The first function your solution must perform is “getting the data.” With the explosion of technology over the
    past two decades, that data might exist in any form, anywhere. Your solution must be able to proactively get
    data from any source based on a set of user-defined rules and extract only the subset of data from that source
    that is required for a given process. For instance, the solution should be equally easy to use with point-to-point
    data capture from application databases, data warehouses and files, as well as from message queues such as
    MQSeries. It should be able to extract data based on record type, data characteristics (numeric, text, date,
    etc.), surrounding characters, look-up table references, and so forth. Accepted format types must go beyond
    tagged, delimited or fixed width formats (most extraction tools can handle these) in order to accommodate
    report-based data. Such data is typically delivered in a nonstandard or inconsistent format in which the data
    identified for extraction is not always found in the same place.
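
To make this concrete, here is a minimal sketch in Python of rule-driven extraction from report-based data, where each field is located by its surrounding label rather than a fixed position. The feed contents, field names and patterns are all invented for illustration:

```python
import re

# Hypothetical report-style feed: field positions vary from page to page,
# so each extraction rule locates a value by its surrounding label text
# rather than by a fixed column offset.
REPORT = """
SETTLEMENT REPORT            RUN DATE: 2024-03-15
  Account: 100-2345   Trade Ref: TR-88771
  Net Amount:      1,250,300.25 USD
"""

# User-defined extraction rules: field name -> (pattern, type converter)
RULES = {
    "run_date":   (r"RUN DATE:\s*(\d{4}-\d{2}-\d{2})", str),
    "account":    (r"Account:\s*([\d-]+)", str),
    "trade_ref":  (r"Trade Ref:\s*([A-Z]+-\d+)", str),
    "net_amount": (r"Net Amount:\s*([\d,]+\.\d{2})",
                   lambda s: float(s.replace(",", ""))),
}

def extract(report: str, rules: dict) -> dict:
    """Apply each rule to the raw report and return only the configured
    subset of fields, converted to usable types."""
    record = {}
    for field, (pattern, convert) in rules.items():
        match = re.search(pattern, report)
        if match:
            record[field] = convert(match.group(1))
    return record

print(extract(REPORT, RULES))
# {'run_date': '2024-03-15', 'account': '100-2345',
#  'trade_ref': 'TR-88771', 'net_amount': 1250300.25}
```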
    The key to achieving this is the ability to configure your solution with rules regarding where and when to
    capture data and what data to extract. These rules should enable both scheduled processing and event-based
    processing. The solution should be capable of real-time processing and not solely batch processing — the data
    solution must not itself become a bottleneck.
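
A minimal sketch of event-based capture, assuming a hypothetical landing directory for feed files. A production system would more likely use filesystem notifications or a message queue, but the principle of firing on arrival rather than waiting for a batch window is the same:

```python
import time
from pathlib import Path

# Event-based capture sketch: poll a (hypothetical) landing directory and
# hand each feed file to the extraction step as soon as it arrives, rather
# than waiting for an end-of-day batch window.
LANDING_DIR = Path("incoming")        # assumed feed landing directory
LANDING_DIR.mkdir(exist_ok=True)
seen = set()

def process(feed: Path) -> None:
    print(f"capturing {feed.name}")   # extraction rules would run here

while True:
    for feed in LANDING_DIR.glob("*.dat"):
        if feed.name not in seen:
            seen.add(feed.name)
            process(feed)             # fire on arrival, not on a schedule
    time.sleep(5)                     # near-real-time polling interval
```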
    It is critical that captured data not be required to be loaded into a database. Instead, the option of simply
    processing and then distributing the data must be available. (Not everyone wants to create a data warehouse
    or feed an application data store, either of which unnecessarily increases cost, storage space, maintenance and
    complexity.)
    The data solution must be able to perform all of these functions while enabling simple user configuration.
    If a data solution forces users to rely on IT resources, it loses a great deal of its timeliness and effectiveness
    because the business cannot dynamically react to changing circumstances. For example, a new security type or
    transaction type may require an input format to change. This in turn requires the data mapping to change. The
    value to the business is the ability to address such changes quickly without having to rely on IT.
    This suggests the skill set required from the line of business is that of a business analyst with a strong technical
    background. This individual can participate in building solutions and can be responsible for maintaining this
    type of solution with little input from the technology side of the firm.
    2) Rules-Driven Processes
    Once data has been extracted, the solution must be able to analyze, compare, perform calculations on and
    apply conversion or reformatting rules to the data. It must be able to utilize look-up tables, external scripts and
    if-then analyses to reshape and enhance data into a form in which it becomes most useful.
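
As an illustration, here is a sketch of lookup-table and if-then enrichment rules; the security codes, field names and conversion rule are invented:

```python
# Sketch of rule-driven enrichment (illustrative names): a lookup table
# maps raw security codes to standard identifiers, and a simple if-then
# rule reshapes each record before it is compared or analyzed.
SECURITY_LOOKUP = {"GOVT01": "US-TREASURY", "CORP77": "CORPORATE-BOND"}

def enrich(record: dict) -> dict:
    out = dict(record)
    # Lookup-table enrichment
    out["security_type"] = SECURITY_LOOKUP.get(record["code"], "UNKNOWN")
    # If-then conversion rule: normalize amounts reported in thousands
    if record.get("unit") == "K":
        out["amount"] = record["amount"] * 1_000
        out["unit"] = "1"
    return out

print(enrich({"code": "GOVT01", "amount": 125.5, "unit": "K"}))
# {'code': 'GOVT01', 'amount': 125500.0, 'unit': '1',
#  'security_type': 'US-TREASURY'}
```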
    Rules should drive business processes. It is here that data exceptions are identified and categorized. This
    includes but is not limited to reconciliation processing, compliance monitoring and risk analysis. Once
    exceptions are defined, the business rules dictate the workflow to resolve those exceptions. This includes
    defining the responsible party or parties, escalation (both manual and event-based), notification and correction
    processes.

    3) Data Publishing
    The job is rarely finished in the data application. The application must be able to proactively deliver that
    data in its most useful format to any destination, based upon rules. Exception management software and
    processes have their roots in reconciliation, which is frequently the jumping-off point for
    more flexible and powerful exception management applications. The integration of these platforms creates
    significant opportunity for the business since reconciliation is often the gateway to cleanse data before it goes
    to key record-keeping applications. The real benefit lies in being able to deliver data back to the source or to
    downstream applications or repositories once it has been validated, cleansed, and matched or tested. In this
    way the data solution facilitates automated communication between legacy systems without difficult and
    expensive custom coding.
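
A sketch of the idea, with hypothetical destination names: the same validated record is reformatted per destination rule, so cleansed data can flow back to the source system or on to downstream applications:

```python
import csv, json, io

# Rules-based publishing sketch (hypothetical destinations): the same
# validated record is formatted per destination and delivered onward.
record = {"trade_ref": "TR-88771", "status": "MATCHED", "amount": 1250300.25}

def publish(record: dict, destination: str) -> str:
    if destination == "ledger_csv":          # downstream batch loader
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=record.keys())
        writer.writeheader()
        writer.writerow(record)
        return buf.getvalue()
    if destination == "source_api":          # feedback to the source system
        return json.dumps(record)
    raise ValueError(f"no publishing rule for {destination}")

print(publish(record, "ledger_csv"))
print(publish(record, "source_api"))
```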
    4) Categorize Exceptions
    Once the data has been captured, it can be managed in the exception workflow suite. An exception workflow
    solution must strive to provide automation across the entire range of functions involved. A loss of automation at
    any point in the exception resolution process increases the likelihood that additional mistakes will be made and
    severely diminishes the audit and control aspect of an exception management process.
    The rules engine is the heart of the system’s ability to identify and categorize exceptions. The user-defined rules
    are critical to accommodating new processes and dynamic environments that lead to new types of exceptions.
    The best exception management system will have a sophisticated yet user-friendly rules system that allows the
    administrator to configure even the most complex routines with ease and without programming.
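
For example, a categorization rule set might be expressed as an ordered table of predicates that an administrator edits, rather than code a programmer maintains. The thresholds and category names here are invented:

```python
# User-configurable categorization sketch: rules are ordered, and the
# first predicate that matches assigns the exception its category. Adding
# a new exception type means adding a table entry, not new programming.
RULES = [
    ("TIMING",         lambda e: e["age_days"] <= 1 and e["delta"] == 0),
    ("PRICE_BREAK",    lambda e: e["delta"] != 0 and e["type"] == "price"),
    ("QUANTITY_BREAK", lambda e: e["delta"] != 0 and e["type"] == "quantity"),
    ("UNCLASSIFIED",   lambda e: True),   # catch-all
]

def categorize(exception: dict) -> str:
    return next(cat for cat, test in RULES if test(exception))

print(categorize({"age_days": 0, "delta": 0, "type": "price"}))     # TIMING
print(categorize({"age_days": 3, "delta": 12.5, "type": "price"}))  # PRICE_BREAK
```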
    5) Direct Responsibility and Escalation
    When an exception is identified, it must be routed to the appropriate party in a proactive manner. Once again,
    the user-configured rules should be capable of defining “who gets what.” In this manner, exceptions will be
    immediately presented to the individual with responsibility and authority for resolving them. Further, rules
    must be available to escalate the exception to other parties based on user action, events or timing.
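
A sketch of "who gets what" routing with time-based escalation, using made-up team names and intervals:

```python
from datetime import datetime, timedelta

# Routing and escalation sketch (hypothetical teams): the rule table maps
# each category to an owner, and unresolved exceptions escalate to the
# next party once the configured interval has elapsed.
ROUTING = {"TIMING": "ops_team", "PRICE_BREAK": "middle_office"}
ESCALATION = {"ops_team":      ("ops_supervisor", timedelta(hours=4)),
              "middle_office": ("risk_manager",   timedelta(hours=1))}

def assign(category: str) -> str:
    return ROUTING.get(category, "exceptions_desk")

def escalate_if_due(owner: str, opened_at: datetime) -> str:
    next_owner, after = ESCALATION.get(owner, (owner, timedelta.max))
    if datetime.now() - opened_at > after:
        return next_owner          # escalation is itself a recorded action
    return owner

owner = assign("PRICE_BREAK")
print(escalate_if_due(owner, datetime.now() - timedelta(hours=2)))
# risk_manager (unresolved past the 1-hour window)
```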
    6) Trigger Automated Research
    Almost every exception comes with the question, “why did that happen?” Resolving exceptions means
    researching exceptions. So taking the application one step further, rules must be capable of automating
    research. For instance, the exception management application will interrogate other data stores based on the
    exception type and present that data back to the user along with the exception itself. In this manner, the
    exception may be resolved directly because all supporting data is right there.
    In legacy solutions, this process is nearly always a manual effort, which naturally creates unaudited gaps in the
    process, introduces new opportunity for error, slows resolution and decreases productivity.
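
To illustrate, a sketch in which each exception category is mapped to the queries that gather its supporting records; the in-memory "data stores" stand in for real systems:

```python
# Rule-triggered research sketch: each exception category is mapped to
# the lookups that fetch its supporting records, so the resolver sees the
# evidence alongside the exception itself. Stores and refs are invented.
TRADE_STORE   = {"TR-88771": {"price": 101.25, "qty": 500}}
CONFIRM_STORE = {"TR-88771": {"price": 101.50, "qty": 500}}

RESEARCH_RULES = {
    "PRICE_BREAK": lambda ref: {
        "our_booking":  TRADE_STORE.get(ref),
        "counterparty": CONFIRM_STORE.get(ref),
    },
}

def research(category: str, ref: str) -> dict:
    gather = RESEARCH_RULES.get(category, lambda _: {})
    return gather(ref)

print(research("PRICE_BREAK", "TR-88771"))
# {'our_booking': {'price': 101.25, 'qty': 500},
#  'counterparty': {'price': 101.5, 'qty': 500}}
```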
    7) Automated Resolution
    Finally, the exception management system must be capable of automating resolution of the exception at its
    source. With properly configured rules, thousands of exceptions can be corrected with no human intervention
    whatsoever. To achieve this objective the data management solution must be able to create, format and send
    correcting data directly to the source systems. It is particularly important that the system leave a finely detailed
    audit trail so that every movement of responsibility and every action is recorded.
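
A sketch of rule-driven resolution with an audit trail; the correction message format and the rule condition are illustrative only:

```python
from datetime import datetime, timezone

# Automated correction sketch: for rule-resolvable exceptions, a
# correcting entry is generated for the source system, and every action
# (including the correction itself) is written to the audit trail.
AUDIT_LOG = []

def log(action: str, detail: dict) -> None:
    AUDIT_LOG.append({"ts": datetime.now(timezone.utc).isoformat(),
                      "action": action, **detail})

def auto_resolve(exception: dict) -> None:
    if exception["category"] == "TIMING" and exception["delta"] == 0:
        correction = {"ref": exception["ref"], "op": "MARK_MATCHED"}
        log("correction_sent", correction)   # would transmit to source
        log("exception_closed", {"ref": exception["ref"]})
    else:
        log("routed_for_review", {"ref": exception["ref"]})

auto_resolve({"category": "TIMING", "delta": 0, "ref": "TR-88771"})
for entry in AUDIT_LOG:
    print(entry)
```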
    8) Business Intelligence
    A key function of the exception management system must be to provide managers at all levels with relevant
    views of the organization’s risk profile. A COO needs to see an enterprise-level view of operational risk, while a
    line-of-business manager needs to be able to spot trends and be alerted if a certain exposure level is tripped.
    Trend-spotting and troubleshooting allow the manager to continually improve the business and steadily reduce
    the operational risk to which it is exposed.
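
A sketch of those two views, using made-up figures: an enterprise roll-up for the COO and a per-line exposure alert for the line-of-business manager:

```python
from collections import Counter

# Two management views over the same open-exception data (made-up
# figures): an enterprise roll-up by business line, and a threshold alert
# when a line's open exposure trips an assumed policy limit.
open_exceptions = [
    {"line": "equities", "exposure": 250_000},
    {"line": "equities", "exposure": 900_000},
    {"line": "fx",       "exposure": 40_000},
]
EXPOSURE_LIMIT = 1_000_000   # per business line, assumed policy figure

counts = Counter(e["line"] for e in open_exceptions)
exposure = Counter()
for e in open_exceptions:
    exposure[e["line"]] += e["exposure"]

print("enterprise view:", dict(counts), dict(exposure))
for line, amount in exposure.items():
    if amount > EXPOSURE_LIMIT:
        print(f"ALERT: {line} open exposure {amount:,} exceeds limit")
```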
    Conclusions
    A system with the flexibility to perform the functions described here is known in the industry as a “generic”
    system. As TowerGroup* writes in a March 2006 analysis of systems that are moving beyond the basics of
    reconciliation:
    Technological advances in vendor solutions are enabling institutions to expand the
    scope of their reconciliation systems beyond reconciling cash and securities to matching
    or comparing related and unrelated data sets. This “generic matching” or “generic
    reconciliation” is accomplished by loosening the constraints in the matching engine’s
    logic and thinking beyond the traditional logic required…. TowerGroup expects that
    generic reconciliations will become the norm, as will the ability for the system to perform
    multiway reconciliations, matching sequentially to three, four or more sources but
    presenting only one exception alert to the user(s).
    *TowerGroup’s reports are titled “Reviewing the Players in the Reconciliation Software Market” (Reference #V46:22M) and
    “Reconciliation Technology: Advancing Beyond Cash and Securities Reconciliation” (Reference #V46:21M), both by Matthew
    Nelson, senior analyst, Investment Management.
    Such generic matching or generic reconciliation is not possible without first deploying a data platform to
    support the reconciliation process on the front end (e.g. acquire, extract, enhance and present data of nearly
    any type from nearly any source) as well as on the back end (e.g. automated exception workflow, automated
    exception research, automated error correction and automated feedback of corrected data to the source or to
    downstream applications or repositories).
    Indeed, more sophisticated, less proprietary solutions create an opportunity today for financial institutions
    to move onto a platform that has virtually unlimited ability to adapt to new products, new markets and new
    regulations, without ever again touching the code or rewriting a database.
 