FiNETIK – Asia and Latin America – Market News Network

Asia and Latin America News Network focusing on Financial Markets, Energy, Environment, Commodity and Risk, Trading and Data Management

Reference Data: Tech Mahindra Details Global Data Utility Based on Acquired UBS Platform

Tech Mahindra, a business process outsourcing specialist and parent of London-based investment management technology consultancy Citisoft, has repositioned a reference data platform acquired from UBS Global Asset Management to offer an offshore reference data utility aimed at meeting market demand for lower cost, high quality data that can reduce risk and increase efficiency.

The global data utility has been introduced under the Tech Mahindra Managed Data Services brand and offers securities reference data across all asset types, reference data for corporate actions, tax information and end-of-day and intra-day validated pricing data. The utility handles data cleansing and validation, with clients buying licences to access the data.

Tech Mahindra suggests the utility differs from other offerings in the enterprise data management market because the company owns the platform and can develop it further. It is also agnostic on data feeds, taking in some 20 feeds from vendors including SIX, Markit, Bloomberg, Thomson Reuters and DTCC.

The company’s first customer is UBS Fund Services in Luxembourg. Under the terms of a five-year services contract with UBS, Tech Mahindra will create and store golden copy data and provide multiple intra-day golden copies to the asset manager. As part of the acquisition and customer deal, Tech Mahindra, which is headquartered in Hyderabad, India, will take on some staff from UBS Global Asset Management who were working on the platform in Luxembourg, but most staff will be located in India.

As a repositioned platform, Tech Mahindra MDS already covers all time zones, markets and asset types, updates 2.5 million issuers on a daily basis, receives 200,000 customer price requests and validates 75,000 prices. Some 20,000 corporate actions are checked every day, along with 1,800 tax figures. Looking forward, Tech Mahindra plans to extend these metrics and add reference data around indices and benchmarks, legal entity identifiers and clients.

While Tech Mahindra will lead sales of the service to the banking, financial services and insurance sectors, Citisoft will be able to provide consultancy as necessary. Steve Young, CEO of Citisoft, says Tech Mahindra MDS has been designed to improve data quality and drive down the total cost of data ownership, in turn reducing risk and increasing efficiency. To manage clients’ cost issues, the company has built a toolkit into the data management system that allows users to analyse the cost of owning data, including people, processes and technology. Data quality will be underpinned by service level agreements and key performance indicators will be added as more clients sign up for services and data volumes grow.

Reflecting on the data challenges faced by financial firms, Citisoft Group CEO Jonathan Clark, concludes: “Outsourcing models have evolved over time and attitudes are changing as firms acknowledge that there is a big difference between outsourcing and offshoring, and that captive outsourcing is not an efficient approach. The need is for a commercial relationship with a centralised data utility that can deliver high-quality, accurate data and a lower total cost of ownership.”

Source: Reference Data Review, 24.07.2013

Filed under: Corporate Action, Data Management, Data Vendor, Market Data, Reference Data, Standards

Outsourcing Reference Data Management: Cost Reduction and New Revenue Opportunities

The past 12 months have seen the emergence of new players offering Business Process Outsourcing (BPO) services for Reference Data Management. These new arrivals expand the range of options available to financial institutions for addressing the challenges of regulatory compliance, operational cost reduction and scalability.

But BPO has other benefits, and innovative adopters have benefited from using the model to create new value-added services. By catering to clients’ data management needs, these players have been able to transform what’s traditionally been considered a cost centre into a new and significant source of revenue.

This paper – from AIM Software – explores this exciting new trend, and describes how an established financial institution took advantage of BPO to turn its enterprise data management initiative into a new source of revenue and business growth.

Download the White Paper Now to Find Out More

Source: A-Team, March 2013

Filed under: Corporate Action, Data Management, Data Vendor, Library, Market Data, Reference Data, Standards

Symbology: EDI’s Corporate Actions Service Adopts Bloomberg Open Symbology

Free-use Data Tagging System Reduces Costs and Risks in Trading

Exchange Data International (EDI), a premier back office financial data provider, today announced it has adopted Bloomberg’s Global Securities Identifiers (‘BBGID’) to name and track all equity securities in its Worldwide Corporate Actions service.

EDI is the latest financial data provider to adopt Bloomberg’s Open Symbology (BSYM), an open and free-use system for naming global securities across all asset classes with a BBGID, a 12-character alphanumeric identifier for financial instruments. EDI has implemented BBGID numbers in its equities reference, pricing and corporate actions data feeds. Its Worldwide Corporate Actions service provides detailed information on 50 corporate action event types affecting equities listed on 160 exchanges.
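For readers unfamiliar with the identifier, the 12-character shape of a BBGID can be illustrated with a simplified structural check. This is only a sketch: the full specification also computes a check digit and restricts certain prefixes, which are not verified here.

```python
import re

# Simplified structural check for a BBGID-style identifier: two leading
# characters, a 'G' in position three, eight alphanumeric characters
# (vowels excluded), and a final digit. This is an illustrative sketch,
# NOT the full specification (no check-digit or prefix validation).
BBGID_PATTERN = re.compile(r"^[B-DF-HJ-NP-TV-Z0-9]{2}G[B-DF-HJ-NP-TV-Z0-9]{8}\d$")

def looks_like_bbgid(identifier: str) -> bool:
    """Return True if the string has the general shape of a BBGID."""
    return bool(BBGID_PATTERN.match(identifier))
```

A string such as an ISIN (`US0378331005`) fails the check because of its vowel and its length-12-but-wrong-shape layout, while a well-formed BBGID passes.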

“EDI decided to integrate Bloomberg’s Open Symbology, as it is easily accessible and has no license fee or restrictions on usage,” said Jonathan Bloch, the Chief Executive Officer of EDI. “Bloomberg’s Symbology also advances straight-through processing of equity orders, which aids reporting and compliance management.”

Peter Warms, Global Head of Bloomberg Open Symbology, said, “Existing identifiers that change due to underlying corporate actions introduce inefficiencies, increase costs and add complexity to the data management process. Bloomberg and EDI recognise the importance of comprehensive, open and unchanging identifiers, like the BBGID, in enabling customers to track unique securities consistently and to process corporate action data seamlessly. As BSYM grows in adoption, interoperability across market systems and software using BSYM will improve steadily and reduce operational costs.”

Source: Bobsguide, 24.09.2012

Filed under: Corporate Action, Data Management, Data Vendor, Market Data, Reference Data, Standards

Managing Corporate Actions Risk – January 2010 – IRD – Inside Reference Data

Despite industry efforts to reduce financial losses typically associated with corporate actions processing, managing risk remains one of the major challenges for the corporate actions industry. On November 18, Inside Reference Data gathered leading corporate actions professionals in a web forum to discuss what more could be done to help improve the situation.

Source: Inside Reference Data, 29.01.2010

IRD_Jan2010_ManagingCorporate Action_ Report

Filed under: Corporate Action, Data Management, Library, News, Reference Data, Risk Management

UK asset managers lack confidence in reference data quality – survey

Over a third of UK-based asset managers and banks are not confident in the quality of reference data they use to support trading activity, according to a survey from IT services firm Patni.

The survey of 100 company representatives found that 91% of asset managers do not have a single supplier of reference data, with the remainder admitting that they were not sure of their source at all. Respondents say that an average of 6% of trades fail as a result of poor reference data.

Yet just half of those questioned say they have not considered outsourcing the management of their reference data to a third party, due to fears of a potential loss of control and security breaches. Meanwhile, the overwhelming reason cited for considering outsourcing is the potential for cost savings, followed by higher levels of accuracy.

Philip Filleul, product manager, reference data, Patni, says: “Many buy-side and sell-side firms are now uncomfortably aware of both the time and costs they devote to purchasing, cleansing and distributing reference data, as well as the risks that arise when these tasks are not performed effectively, among them failed trades and lost revenue opportunities.”

“The twin pressures of achieving regulatory compliance and straight-through processing have highlighted substantial redundancy and duplication of effort in the area of reference data management.

“One in ten trades fails on the first settlement attempt – and of these, 60-70 per cent can be attributed to poor data management.”

Research from the Tower Group, cited by the report, showed that nearly two-thirds of failed trades were caused by inaccurate data.

Source: Finextra, Bobsguide, 29.10.2010

Filed under: Corporate Action, Data Management, Market Data, Reference Data, Risk Management, Standards

Corporate Actions Report Sept 2009 – Reference Data Review

Download: Reference Data Review Special Report Corporate Actions 2009 Edition

Rather than diverting attention away from corporate actions automation projects, the financial crisis appears to have accentuated the vital nature of this data. Financial institutions are more aware than ever before of the impact that inaccurate corporate actions data has on their bottom lines, a result of the increased focus on risk management in the market as a whole.

This renewed focus on the basics of data management has, in turn, spurred on vendors in the space to significantly up their game. The focus of this innovation has been on bringing prices down, making the implementation of these solutions easier and designing more intuitive user interfaces. This has manifested itself in a range of enhancements, not least of which are the deployment of web-based front ends and software as a service (SaaS) models.

Financial institutions and (surprisingly) issuers have also been doing their bit to improve the often complex muddle of corporate actions data via various working groups and standards initiatives. Earlier in 2009, the European issuer community agreed that a framework for shareholder communication and cross border voting is needed in the market. This was then followed by the release of the results of the Corporate Actions Joint Working Group’s standardisation initiative, which is aimed at defining each category of corporate action in the market.

Corporate actions are most certainly back in the spotlight.

Filed under: Corporate Action, Data Management, Data Vendor, Library, News, Reference Data, Risk Management, Standards

BNP Paribas Improves Quality and Efficiency Across Silos with Data Assessment Strategy

Paris – BNP Paribas is in the middle of an enterprise-wide reference data assessment initiative as part of a larger program, which aims to increase efficiency, improve data quality and help manage data costs across silos, Inside Reference Data has learned.

The data assessment exercise, started in spring 2009 and expected to deliver savings by the end of the year, has included reviewing vendor contracts.

“We thought it could be interesting to have another view of the contracts we currently have in place,” says Paris-based Andre Kelekis, senior strategist at BNP Paribas, adding that the first area of focus, without directly impacting the systems, was to assess all vendor contracts and sourcing, with the aim of optimizing procurement in every area.

The merger with the former Fortis Bank in May 2009, now BNP Paribas Fortis, slowed down the assessment procedure, as the revised scope of the effort now also includes consolidating contracts and reviewing data spend on the Fortis side of the business.

Still, the data inventory is under way, and as part of the next phase, the bank plans to optimize the data feeds without modifying applications or the database.

The bank does not currently have a full enterprise data management (EDM) project in place, but as part of the assessment exercise it is paying close attention to the data to ensure high quality and efficiency.

“At this stage we are not claiming to want a full EDM strategy, but we do want to know if we could have a much more efficient organization, and to find out if this is possible and what needs to be done we are paying close attention to the data,” says Kelekis.

In fact, the bank does not plan to re-architect its data management systems as part of the assessment initiative. “Modifying the infrastructure could take around two to three years,” says Kelekis, explaining that the current efforts are focused on the data itself.

One of the main drivers behind the data assessment was to be able to overcome the data challenges that come hand in hand with a typically siloed organization and be able to evaluate the levels of data quality within silos.

“By construction we are in a siloed company, business has its priorities and it’s not easy to work on data projects … some still think controlling their systems is better than having to rely on sub-parties,” says Kelekis, adding that being able to start data projects, largely transversal in nature, depends largely on the mentality and culture within the organization.

But Kelekis says that at the moment, he sees some silos developing in the right direction. “The fixed-income system, with its rationalized reference data feed enabling data optimization, for example, is advanced … it could even be used as a model for all the other silos,” he says.

The Push for Governance

The bank does not currently have an enterprise-level data governance program in place, but it has facilitated data discussions by introducing a market data and reference data steering committee, created in the early 2000s, which unites professionals from the various silos within the organization to discuss data management within BNP Paribas while raising awareness of what this means in terms of costs and systems.

“We are not ready at the moment to put in place a data governance program at the enterprise level,” says Kelekis, adding that as long as market and reference data remain difficult to understand at the top management level, it will be complex to find a global sponsor and put in place a governance strategy.

Communication is key, as Kelekis says it is complex to carry out a global-transverse data project without a global sponsor, especially if the silos do not understand the value good-quality data can bring to their operations.

“The steering committee is a means to share information across the different silos, but it is only very efficient when those representing such silos are top management and have decision-taking power,” says Kelekis, adding that if the silos are not represented at a very high level, the main purpose of the committee is just information gathering.

Source: InsideReferenceData, 18.08.2009 by Carla Mangado

Filed under: Corporate Action, Data Management, News, Reference Data, Risk Management, Standards

Data Quality – Understanding and Management Commitment

This column will help you take the message to management because without first their understanding, and then their commitment, nothing of any significance will happen. I’ve tossed in a couple of points on the cost of poor quality that should capture their attention.

What Is Data Quality?

There are a number of indicators of quality data.

  1. The Data Is Accurate – This means a customer’s name is spelled correctly and the address is correct.  If the Marketing Department doesn’t have the correct profile for the customer, Marketing will attempt to sell them the wrong products and present a disorganized image of the organization.  When data on a company vehicle is entered into the system, it may be valid (a vehicle number that is in the database), but it may be inaccurate (the wrong vehicle number).
  2. The Data Is Stored According To Its Data Types
  3. The Data Has Integrity – The data will not be accidentally destroyed or altered.  Updates will not be lost due to conflicts among concurrent users. Much of this is the responsibility of the DBMS, but proper implementation of the DBMS should not be assumed.  Robust backup and recovery procedures as implemented by the installation are needed to maintain some levels of integrity.  In addition, operational procedures that restrict a batch update from being run twice are also necessary.
  4. The Data Is Consistent – The form and content of the data should be consistent.  This allows for data to be integrated and to be shared by multiple departments across multiple applications and multiple platforms.
  5. The Databases Are Well Designed – A well designed database will perform satisfactorily for its intended applications, it is extendible, and it exploits the integrity capabilities of its DBMS.
  6. The Data Is Not Redundant – In actual practice, no organization has ever totally eliminated redundant data.  In most data warehouse implementations, the data warehouse data is partially redundant with operational data.  For certain performance reasons, and in some distributed environments, an organization may correctly choose to maintain data in more than one place and also maintain the data in more than one form.

The redundant data to be minimized is the data that has been duplicated for none of the reasons stated above but because:

  • The creator of the redundant data was unaware of the existence of available data.
  • The redundant data was created because the availability or performance characteristics of the primary data were unacceptable to the new system. This may be a legitimate reason or it may also be that the performance problem could have been successfully addressed with a new index or a minor tuning effort and that availability could have been improved by better operating procedures.
  • The owner of the primary data would not allow the new developer to view or update the data.
  • The lack of control mechanisms for data update indicated the need for a new version of the data.
  • The lack of security controls dictated the need for a redundant subset of the primary data.

In these cases, redundant data is only the symptom and not the cause of the problem.  Only managerial vision, direction, and a robust data strategy would lead to an environment with less redundant data.

  7. The Data Follows Business Rules – As an example, a loan balance may never be a negative number.  This rule comes from the business side and IT is required to establish the edits to be sure the rule is not violated.
  8. The Data Corresponds To Established Domains – These domains are specified by the owners or users of the data.  The domain would be the set of allowable values or a specified range of values.  In a Human Resource System, the domain of sex is limited to “Male” and “Female.”  “Biyearly” may be accurate but still not an allowable value.
  9. The Data Is Timely – Timeliness is subjective and can only be determined by the users of the data.  The users will specify that monthly, weekly, daily, or real-time data is required.  Real-time data is often a requirement of production systems with on-line transaction processing (OLTP).  If monthly is all that is required and monthly is delivered, the data is timely.
  10. The Data Is Well Understood – It does no good to have accurate and timely data if the users don’t know what it means.  Naming standards are a necessary (but not sufficient) condition for well-understood data.  Data can be documented in the Data Dictionary/Repository, but the creation and validation of the definitions is a time consuming and tedious process. This is, however, time and effort well spent.  Without clear definitions and understanding, the organization will exhaust countless hours trying to determine the meaning of their reports or draw incorrect conclusions from the data displayed on the screens.
  11. The Data Is Integrated – An insurance company needs both agent data and policyholder data.  These are typically two files, databases, or tables that may have no IT connection.  If the data is integrated, meaningful business information can be readily generated from a combination of both the agent and policyholder data.  Database integration generally requires the use of a common DBMS. There is an expectation (often unfulfilled) that all applications using the DBMS will be able to easily access any data residing on the DBMS.  An integrated database would be accessible from a number of applications.  Many different programs in multiple systems could access and, in a controlled manner, update the database.

    Database integration requires the knowledge of the characteristics of the data, what the data means, and where the data resides.  This information would be kept in the Data Dictionary/Repository.

An integrated database would have the following potential benefits:

  • Less redundant data
  • Fewer possibilities for data inconsistency
  • Fewer interface programs (a major resource consumer)
  • Fewer problems with timing discrepancies
  • More timely data
  12. The Data Satisfies The Needs Of The Business – The data has value to the enterprise.  High quality data is useless if it’s not the data needed to run the business.  Marketing needs data on customers and demographic data, Accounts Payable needs data on vendors and product information.
  13. The User Is Satisfied With The Quality Of The Data And The Information Derived From That Data – While this is a subjective measure, it is, arguably, the most important indicator of all.  If the data is of high quality, but the user is still dissatisfied, you or your boss will be out of a job.
  14. The Data Is Complete –
  15. There Are No Duplicate Records – A mailing list would carry a subscriber, potential buyer, or charity benefactor only once.  You will only receive one letter that gives you the good news that “You may already be a winner!”
  16. Data Anomalies – From the perspective of IT, this may be the worst type of data contamination.  A data anomaly occurs when a data field defined for one purpose is used for another.  For example, a currently unused, but defined field is used for some purpose totally unrelated to its original intent.  A clever programmer may put a negative value in this field (which is always supposed to be positive) as a switch.
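Several of the indicators above (business rules, established domains, duplicate records) lend themselves to automated checks. The following is a minimal sketch of such checks; the field names (`loan_balance`, `sex`) and rules are the column's own examples, but the function names and record layout are hypothetical.

```python
# Rule-based checks for a few of the quality indicators described above.
# Field names follow the column's examples; the record layout is assumed.
ALLOWED_SEX_VALUES = {"Male", "Female"}  # an established domain

def check_record(record: dict) -> list:
    """Return a list of rule violations for one record."""
    violations = []
    # Business rule: a loan balance may never be negative.
    if record.get("loan_balance", 0) < 0:
        violations.append("loan_balance is negative")
    # Domain rule: value must come from the allowed set.
    if record.get("sex") not in ALLOWED_SEX_VALUES:
        violations.append("sex outside allowed domain")
    return violations

def find_duplicates(records: list, key: str) -> set:
    """Detect duplicate records by a key field (e.g. a subscriber id)."""
    seen, dupes = set(), set()
    for r in records:
        k = r.get(key)
        if k in seen:
            dupes.add(k)
        seen.add(k)
    return dupes
```

In practice such edits would be wired into data entry and batch load paths, so that violations are rejected or flagged before they reach downstream systems.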

Design Reviews

An important set of information to be included in design reviews is the requisite quality of the data under consideration and the actual state of the data.  The basic question to be asked is “How clean, timely, etc. must the data be?”  In the design review, the team members would consider the data source, the process of update and delete, and the quality controls imposed on those accessing the data.

The Design Review would review and validate that standards are being followed. The review process may make recommendations to clean up the data, establish strict controls on shared updating, and assure sufficient training for users who would query the data.

Assessment Of Existing Data Quality

As people overestimate the intelligence of their grandchildren and the sweet nature of their dogs, organizations overestimate the quality of their own data.  A reality check is generally needed.  Poor quality data can be detected in a number of ways:

  • Programs that abnormally terminate with data exceptions.
  • Clients who experience errors or anomalies in their reports and transactions and/or don’t trust their reports or don’t trust the data displayed on their screens.
  • Clients who don’t know or are confused about what the data actually means.
  • On-line inquiry transactions and reports that are useless because the data is old.
  • Data that cannot be shared across departments due to lack of data integration.
  • Difficulty for clients to get consolidated reports because the data is not integrated.
  • Programs that don’t balance.
  • In the consolidation of two systems, the merged data causes the system to fail.

Quality may be free but data quality does require an initial investment.  It takes people and resources to bring data to the desired pristine state.  If data is allowed to remain in its current (dirty) state, there may be a substantial cost and disruption to the organization.  Very few organizations understand the costs and exposures of poor quality data.

Impact Of Poor Quality Data

Data is an asset but it can only be an asset if the data is of high quality.  Data can also be a liability if it is inaccurate, untimely, improperly defined, etc.  An organization may be better off not having certain data than having inaccurate data, especially if those relying on the data do not know of its inaccuracy.  A hospital would be better off not knowing a patient’s blood type than believing and trusting it to be “O+.”

Which Data Should Be Improved?

It should be obvious that it’s impossible to improve the quality of all the data in an installation.  The prioritization is much like triage.  The energy should be spent on data where the quality improvement will bring an important benefit to the business.  Another criterion suggesting improvement is data that can be fixed and kept clean.  Unimportant data can be ignored.  Data that will become obsolete can also be bypassed.  Examples are:

  1. The business will be bought
  2. The data will be converted because of a new application
  3. A reengineering of the business will cause certain data to be retired

If the Marketing Department is reviewing the demographics of their customers, the zip code (as part of the address) is important while the rest of the address is less critical.

There will be wide variations in the costs to clean different files and databases.  This will enter into any decision about which data to purify.  The cost of perfectly accurate data may be prohibitive and may not be cost effective.  Based on the source of the data, accuracy may also be an impossibility.

Users of data may be willing to settle for less than totally accurate data.  Even so, it is important that the users know the level of quality they are getting.  A greeting card company asked their retailers to measure the number of linear feet devoted to that company’s card products.  Those who analyzed the data knew the data to be inaccurate but preferred inaccurate data to no data at all.  A large computer manufacturer asked their marketing representatives and technical engineers to report on how they spent their time.  It was well known that the respondents were not keeping very good records themselves and their reports reflected the lack of their concern for accuracy.  Those who analyzed the data knew of the inaccuracies but were looking for trends and significant changes to indicate shifts in how jobs were being performed.  The inaccurate data, in both of these cases, was acceptable.

Purification Process

To clean up the data, the following steps should be followed:

  1. Determine the importance of data quality to the organization.
  2. Assign responsibility for data quality.
  3. Identify the enterprise’s most important data.
  4. Evaluate the quality of the enterprise’s most important data.
  5. Determine users’ and owners’ perception of data quality – Users will convey their understanding of the data’s quality and will often indicate why the data has problems.
  6. Prioritize which data to purify first.
  7. Assemble and train a team to clean the data.
  8. Select tools to aid in the purification process.
  9. Review data standards.
  10. Incorporate standards in the application development process to ensure that new systems deliver high quality data.
  11. Provide feedback and promote the concept of data quality throughout the organization.
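Step 6 above (prioritizing which data to purify first) follows directly from the triage criteria discussed earlier: spend the energy where cleanup brings business benefit and is actually feasible, and bypass data that is unimportant or soon obsolete. A hypothetical scoring sketch, with entirely illustrative weights and field names:

```python
# Hypothetical prioritization of cleanup candidates (step 6 above).
# "business_benefit" and "fixability" are assumed scores in [0, 1];
# data flagged soon_obsolete is bypassed, per the triage criteria.
def purification_priority(datasets: list) -> list:
    """Sort datasets so the best cleanup candidates come first."""
    def score(d):
        if d.get("soon_obsolete"):
            return 0.0  # obsolete data can be bypassed entirely
        return d["business_benefit"] * d["fixability"]
    return sorted(datasets, key=score, reverse=True)
```

The multiplicative score is one simple design choice: a dataset that is highly valuable but practically unfixable, or trivially fixable but worthless, both rank low.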

Roles And Responsibilities

The creation and maintenance of quality data is not the sole province of any one department.  The responsibility touches Application Developers, Database Administrators, Data Administrators, Quality Assurance, Data Stewards, Internal Auditors, Project Managers, and most importantly, senior management.  The importance of quality data must be understood by senior management and expressly communicated throughout the organization.  Words are not as important as deeds.  When quality measures appear in performance plans, reviews, and bonuses, people finally believe that quality is important.  It is equally important that time and resources be allocated to development schedules to support management’s commitment to quality.

Impact Of Data Quality On The Data Warehouse

Bad data should never be allowed into the data warehouse unless the problems are recognized and acknowledged by those who will use the data.  Whenever possible, the data should be validated and purified prior to extraction.  If bad data enters the data warehouse, it may have the effect of undermining the confidence of those who access the data.  Clients and IT must be able to rely on the data, regardless of whether it is detailed, summarized, or derived.  The effort to clean up data once it is in the data warehouse becomes a major and never-ending task.  It should not be the responsibility of those administering the data warehouse to clean up bad data.  The cleanliness standard puts an additional burden on the stewards of the data to perform validations of the source data.

Assessing The Costs Of Poor Quality Data

It will be difficult to assign real dollars to most of these categories.  If estimates in real dollars are possible, conservative numbers should always be used.  When an organization has experience with any of the following problems and if the costs of fixing those problems have been calculated, those figures can be assigned.

  1. Bad decisions due to incorrect data.
  2. Lost opportunities because the required data was either unavailable or was not credible.
  3. Time and effort to restart and rerun jobs that abnormally terminated due to bad data.
  4. In a buyout situation, accepting too low a price for your business because you cannot properly demonstrate your business potential, or your business seems to be in disarray because your reports are inconsistent.
  5. Fines imposed by regulating authorities for non-compliance or violating a governmental regulation as a result of bad data.
  6. Time and resources to fix a problem identified in an audit.
  7. Hardware, software, and programmer/analyst costs as a result of redundant data.
  8. The costs and repercussions of bad public relations due to bad or inconsistent data. (Ex. A public agency unable to answer questions from the press or from their Board of Directors.)
  9. Time wasted by managers arguing and discussing inconsistent reports which are the result of bad data.
  10. Poor relations with business partners, suppliers, customers, etc. due to overcharging, underpayment, incorrect correspondence, shipping the wrong product, etc.
  11. The time spent correcting inaccurate data.  These corrections may be performed by line personnel or by IT.
  12. The costs of lost business in operational systems because of poor quality data (data was wrong or non existent).  An example is the lost marketing opportunity for an insurance company that does not have accurate information about a client and thus loses the opportunity to market an appropriate insurance product.

Data Quality Feedback To Senior Management

Unlike measurements of performance and availability, the quality of data will not be changing daily.  Quality can, however, be quickly compromised by operating procedures that cause improper batch updates. Those responsible for data will want to make periodic checks to determine trends and progress in improving data quality.  The results should be reported to IT management and to the departments that own the data.

The quality of data can be measured, but before any measurement takes place, the following questions should be answered:

  1. Why is the quality of the data being measured? – The classic answer is that without measurement, management of the data is impossible.
  2. What is being measured? – Some possibilities include:
    1. trends, i.e., is the data getting cleaner or dirtier?
    2. user satisfaction with the quality of the data
  3. What will be done with the measurements? – Some possibilities include: 1) focus on the data that needs to be purified, 2) provide a basis for cost justifying the purification effort, and 3) give information for prioritizing the cleanup process.
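
The trend measurement in point 1 can be sketched as a simple error-rate score computed over periodic snapshots. This is a minimal illustration, not from the column itself; the field names and figures are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class QualitySnapshot:
    """One periodic measurement of a table's data quality."""
    date: str
    records_checked: int
    records_with_errors: int  # missing, invalid, or inconsistent values

    @property
    def error_rate(self) -> float:
        return self.records_with_errors / self.records_checked

def trend(snapshots):
    """Compare first and last snapshot: is the data getting cleaner or dirtier?"""
    first, last = snapshots[0].error_rate, snapshots[-1].error_rate
    if last < first:
        return "improving"
    if last > first:
        return "worsening"
    return "flat"

history = [
    QualitySnapshot("2009-01", 10_000, 420),   # 4.2% error rate
    QualitySnapshot("2009-04", 10_000, 260),   # 2.6% error rate
]
print(trend(history))  # improving
```

Reported to management over time, a series like this answers both the "why" (management requires measurement) and the "what will be done" (prioritising the cleanup where error rates are worst).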


This column has identified the main categories of data quality, discussed how to identify data quality problems and how to address them, and offered suggestions for incorporating data quality topics into design reviews.  It also covered roles and responsibilities, data quality as it affects the data warehouse, and the importance of bringing senior management into the picture.

Data is a critical asset for every enterprise.  The quality of the data must be maintained if the enterprise is to make effective use of this most important asset.  Improvements in data quality do not just happen; they are the result of a diligent and on-going process of improvement.

Source: Sid Adelman, 20.06.2009; read the full article at the Enterprise Information Management Institute

About The Author

Sid Adelman is a principal consultant with Sid Adelman & Associates, an organization specializing in planning and implementing data warehouses, performing data warehouse and BI assessments, and in establishing effective data strategies.  His web site is

Filed under: Corporate Action, Data Management, Data Vendor, Library, Market Data, News, Reference Data, Risk Management, Standards

Operational Risk and Reference Data: Exploring Costs, Capital Requirements and Risk Mitigation

Financial transactions can be thought of as a set of computer-encoded data elements that collectively represent 1) standard reference data, identifying a specific product bought by a specific counterparty, and 2) variable transaction data such as trade date, quantity and price. The reference data components of a financial transaction identify it as a specific financial product (security number, symbol, market, etc.), its unique type, terms and conditions (asset class, maturity date, conversion rate, etc.), its manufacturer or supply chain participant (counterparty, dealer, institution, exchange, etc.), its delivery point (delivery, settlement instructions and location), its delivery or inventory price (closing or settlement price) and its currency. Analogous to specifications for manufactured products, reference data also defines the product’s changing specifications (periodic or event-driven corporate actions), occasional changes to sub-components (calendar data, credit ratings, historical prices, betas, correlations, volatilities) and seasonal incentives or promotions (dividends, capital distributions and interest payments).
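
The split described above, static reference data versus variable transaction data, can be sketched as a simple record structure. The field names and values here are illustrative, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ReferenceData:
    """Static product identification: shared by every trade in the product."""
    security_id: str           # e.g. an ISIN or internal security number
    symbol: str
    asset_class: str
    maturity_date: Optional[str]
    settlement_location: str
    currency: str

@dataclass
class Transaction:
    """One trade: the variable fields plus a link to the shared reference record."""
    reference: ReferenceData
    trade_date: str
    quantity: int
    price: float

bond_ref = ReferenceData("XS0123456789", "ACME29", "bond",
                         "2029-06-15", "Euroclear", "EUR")
trade = Transaction(bond_ref, "2009-05-28", 1_000_000, 99.35)

# Many transactions share one reference record, so a fault in that record
# propagates to every trade that points at it -- the operational-risk link
# between faulty reference data and transaction losses.
```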

Download: Operational Risk and Reference Data Nov 2005 Study

This paper documents the impact of increasingly costly and duplicative expenditures for the sourcing, maintenance and processing of reference data; the effect of recent regulatory mandates on reference data; and the role faulty reference data plays in operational risk and operational capital. Although the paper quantifies this impact, reference data remains an area needing further research because of the limited granularity of the cost and loss data compiled to date. This is the first paper to draw together the literatures of operational risk and reference data and to present useful insights from both.

Source: November 2005, by Allan D. Grody, Fotios C. Harmantzis, PhD, and Gregory J. Kaple

Filed under: Corporate Action, Data Management, Library, Reference Data, Risk Management, Services, Standards

DTCC, Swift and XBRL US team on corporate actions processing

The Depository Trust & Clearing Corporation (DTCC), Swift and XBRL US have joined forces in a bid to improve communications between issuers and investors for corporate action announcements in the US market.

The partners claim they will “fundamentally change” corporate actions announcement processing, bringing greater accuracy, and reduced risks and costs by improving transparency and communication between issuers and investors.

The firms say that, on average, some 200,000 corporate actions such as dividends, bond redemptions, rights offerings and mergers are announced each year by publicly traded companies and other issuers or offerors in the US.

Because the processing of these announcements throughout the corporate actions lifecycle is still largely manual, it remains error-prone and time-consuming, creating the potential for heavy losses and a significant negative impact on investors.

The partners cite a 2006 study from research firm Oxera which found losses on corporate actions worldwide were between $400 million and $900 million each year.

The group’s plan, outlined in a statement of direction, will look to build on the existing ISO standards by integrating the benefits of XBRL electronic data tagging technology, already used by public issuers in the US, to streamline the processing of corporate action announcements.

The three organisations will work together on a corporate actions taxonomy, or classification, aligned with ISO 20022 repository elements.

This new taxonomy will support a seamless transition from issuer-generated documentation to data, using XBRL technology, enabling issuers to tag or electronically capture and identify key data, such as the terms of a reorganisation when preparing documents for a corporate action.

The data “tags” and elements will be aligned with ISO 20022, permitting XBRL-tagged data to be readily converted.
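
That conversion from XBRL-tagged data to ISO 20022 elements could look roughly like the following sketch. The tag and element names here are invented for illustration; the actual taxonomy was still being defined by the three organisations at the time of writing:

```python
# Hypothetical alignment of XBRL-style tags with ISO 20022-style element
# names for a dividend announcement (names are illustrative only).
XBRL_TO_ISO20022 = {
    "ca:DividendRate": "GrssDvddRate",
    "ca:RecordDate":   "RcrdDt",
    "ca:PaymentDate":  "PmtDt",
}

def convert(tagged_announcement: dict) -> dict:
    """Rename XBRL tags to their aligned ISO 20022 element names,
    dropping any tag with no mapping."""
    return {XBRL_TO_ISO20022[tag]: value
            for tag, value in tagged_announcement.items()
            if tag in XBRL_TO_ISO20022}

# Data as tagged by the issuer when preparing the announcement document:
announcement = {
    "ca:DividendRate": "0.45",
    "ca:RecordDate":   "2010-03-12",
    "ca:PaymentDate":  "2010-03-26",
}
iso_message_fields = convert(announcement)
```

Because the tags are aligned with the ISO 20022 repository up front, the conversion is a mechanical rename rather than a manual re-keying and interpretation step, which is precisely the inefficiency the initiative targets.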

Swift will roll out the new ISO 20022 corporate actions messages globally, building on the efficiencies gained through ISO 15022 adoption.

DTCC will make all corporate action announcements it publishes available in the ISO 20022 format beginning in 2010. All existing legacy publication files will be decommissioned by 2015.

Chris Church, CEO, Americas and global head of securities, Swift, says: “Our initiative will increase the return on investment for the industry’s existing market infrastructures by bringing greater efficiencies and reducing the costs and risks associated with processing corporate actions. Manual interpretation, re-keying and manual exceptions in corporate action processing will be significantly reduced.”

Source: FINEXTRA, 28.05.2009

Filed under: Corporate Action, Data Management, News, Reference Data, Risk Management, Services, Standards

Data Quality key to risk overhaul – survey

Just a third of financial services executives think risk management principles in their business remain sound, with over half conducting or planning a major overhaul of operations, according to a survey by the Economist Intelligence Unit.

The survey of 334 executives, conducted for SAS, shows the improvement of data quality and availability is likely to be the key area of focus in the management of risk over the next three years, cited by 41% of respondents.

Strengthening risk governance is a key area for 33%, developing a firm-wide approach to risk is important for 29%, and improved technology infrastructure is cited by 24%.

The research highlights a belief that all departments, not just lending, need a clearer picture of risk adjusted performance and the behaviours that influence it.

Virginia Garcia, senior research director, Tower Group, says: “Although technology is not to blame for the widespread financial crisis, rigid technology and business processes have undoubtedly made it difficult for many FSIs to respond rapidly and effectively to the financial crisis. This situation reinforces the business case for a more agile and intelligent enterprise architecture to mitigate risk by helping FSIs adjust to volatile business dynamics.”

Less than a third of those questioned feel regulators handled the financial crisis properly but respondents agree that transparency needs to be heavily emphasised within proposed reforms.

They point to greater disclosure of off-balance-sheet vehicles, stronger regulation of credit rating agencies, and central clearing for over-the-counter derivatives as the initiatives thought to be most beneficial to the financial services industry.

“Now more than ever, this survey confirms the need for the players in financial markets to make transparency a major part of a comprehensive overhaul of risk and performance management to make better business decisions,” says Allan Russell, head, global risk practice, SAS.

Source: Finextra, SAS, 06.05.2009

Filed under: Banking, Corporate Action, Data Management, Market Data, News, Reference Data, Risk Management

Risk & Compliance Report March 2009: Reference Data Review

Download: RDR  Risk_&_Compliance Report March 2009  A-TEAM

The current financial climate has meant that risk management and compliance requirements are never far from the minds of the boards of financial institutions. In order to meet the slew of regulations on the horizon, firms are being compelled to invest in their systems in order to cope with the new requirements.

Data management is an integral part of this endeavour, as it represents the building blocks of any form of risk management or regulatory reporting system. Only by first understanding the instruments being traded and the counterparties involved in these trades can an institution hope to be able to get a handle on its risk exposure. The fall of Lehman Brothers and the ensuing chaos was proof enough of the data quality issues still facing the industry in this respect.

Regulators are demanding more and more data transparency from all players in the financial markets and this has meant that the ability to access multiple data silos has become essential. A siloed mentality towards data will no longer be acceptable, as these regulators seek a holistic view of positions and the relationships between counterparties.

All of this represents a significant challenge to the data management community, given that standards are lacking in many areas, for example business entity identification. But with great challenges come tremendous opportunities to solve data management issues that have lingered in the background for far too long.

Source: A-TEAM Group 13.03.2009

Filed under: Corporate Action, Data Management, Library, Market Data, News, Reference Data, Risk Management, Services, Standards

Corporate Actions Report April 2009 – Inside Reference Data

Download: Corporate Actions April 2009 – Inside Reference Data

This is at least what industry groups hope will happen. In fact, there is already a working group creating an XBRL taxonomy for corporate actions, and Swift and the Depository Trust & Clearing Corporation are both working with XBRL on the corporate actions initiative. So even though change is not expected overnight, the outlook is now considerably more encouraging than it was a year ago.

Corporate actions and the issuer debate are, in fact, becoming an important area to follow. With this special report, which includes comments from industry experts and a news review, we hope to give readers the opportunity to keep on top of the latest developments in the corporate actions space.

Source: Inside Reference Data, April 2009

Filed under: Corporate Action, Data Management, Data Vendor, Exchanges, Library, News, Reference Data, Risk Management, Standards