Last updated: February 1st, 2021
Membership info. Deposit account data. Loan, credit card, and mortgage information. Data captured by web apps and mobile apps.
This information, enhanced by cutting-edge analytics, can be a rich competitive differentiator for credit unions. It can provide them with the speedy insights they need to drive more effective decision-making while concurrently improving operations, product development, sales, and marketing.
But accessing, deduplicating, and standardizing data can be arduous. Credit unions typically store data in a siloed mélange of point solutions, databases, core systems, and legacy systems – housed in on-premises data centers, in the cloud, or in hybrid and multi-cloud environments. Worse, credit unions typically lack a simple, unified process for viewing and accessing all this data: each data store may have a different access method. And few people in the organization understand those methods.
That’s why, according to The Knowlton Group’s Credit Union Executive’s Guide to Data and Analytics, the average billion-dollar credit union spends more than 5,000 hours per year manually extracting and merging data, and producing reports. ibi, a TIBCO company, estimates that many of these credit unions spend 80 percent of their data and analytics time finding and coalescing information – leaving only 20 percent for the type of analysis that leads to true business insight.
In short, it’s a data jungle out there. Credit unions must move from reactive to proactive relationships with data to hack their way through the scrub and arrive at analytical clarity. Currently, many credit unions are stymied by a reactive relationship. Information is siloed, incorrect, or imprecise. Data management processes are incomplete and ad hoc.
In the proactive state, data and platforms are standardized, consolidated, and integrated. Data governance, policies, and procedures are uniform throughout the organization. Employees have easy access to the insights they need to excel in their jobs. And the manual effort required to manage and use data drops dramatically.
Moving to a proactive relationship with data is necessary for the implementation of a smart analytics plan. For analytics to work properly, data must be clean, accurate, deduplicated, consistent, enhanced, and accessible. It must also be standardized and properly formatted so that new systems have an apples-to-apples view of the information studied. This view is especially important when extracting data from legacy systems. These systems record information differently. For example, one financial system may represent one million five hundred thousand dollars as 1,500,000. Another system may use 1.5. These different figures may work perfectly well in their source systems, but they conflict when data is coalesced.
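To make the dollar-figure example concrete, here is a minimal sketch of the kind of normalization step involved. The function name and the idea of tagging each source system with a unit are illustrative assumptions, not a reference to any particular vendor tool:

```python
def normalize_amount(value, unit="dollars"):
    """Convert a source-system amount to a canonical figure in dollars.

    unit is "dollars" for systems that store full figures (e.g. "1,500,000")
    or "millions" for systems that store abbreviated figures (e.g. "1.5").
    """
    # Strip thousands separators before parsing.
    amount = float(str(value).replace(",", ""))
    if unit == "millions":
        amount *= 1_000_000
    return amount

# Both source-system representations map to the same canonical figure:
full = normalize_amount("1,500,000")              # → 1500000.0
abbreviated = normalize_amount("1.5", "millions")  # → 1500000.0
```

Only after both systems' figures pass through a step like this do they become the apples-to-apples view that downstream analytics requires.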
Clearing a data path
Prepping data – and providing easy, ubiquitous access to it – is no easy task. This work requires specialized skills often lacking in even the best IT departments. Prebuilt enterprise data solutions, now available, offer modular deployment capabilities. These solutions include master data management (MDM) platforms and systems that extract, cleanse, integrate, and validate data – delivering a single source of trusted information, enhancing it for real business insight, and then democratizing access. Automated data refresh and reconciliation capabilities ensure that data is current and complete, with little or no IT intervention. The best of these systems also proactively detect and capture bad data before it can infect other systems, and track data-quality problems that require manual correction.
Be strategic about where and when you plant new trees
As important as data prep is, many credit unions spend too much time and money on the impossible goal of achieving perfectly clean, accurate data.
Data errors typically occur at input, and new information enters credit union systems all the time. The question is not how your credit union can perfectly clean its data, but how data deficiencies will affect overall data and analytics goals.
If you haven’t deduplicated data and have no way of telling how many distinct members you have, that’s a problem. If you can’t coalesce identities – if you can’t tell that John Allen (mortgage borrower) is the same person as John T. Allen (credit card holder) and John Thaddeus Allen (savings account depositor) – that’s another problem.
Because of these issues, you may think you have three members when you actually have one. You can’t build a complete picture of Mr. Allen’s relationship with the credit union. If, however, you’ve used available enterprise technologies to coalesce and link these entities, if you understand that they are the same person, the fact that the last four digits of Mr. Allen’s phone number are alternately recorded as 3329 and 3328 becomes less of a concern.
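To illustrate what "coalescing and linking" entities can mean in practice, here is a toy rule-based sketch. The record fields, the matching rules, and the date of birth shown are all hypothetical assumptions for illustration; real entity-resolution platforms use far richer probabilistic matching:

```python
from dataclasses import dataclass

@dataclass
class Record:
    first: str   # given name as recorded in the source system
    middle: str  # middle name or initial, possibly empty
    last: str
    dob: str     # date of birth, ISO format (hypothetical linking key)

def compatible(x: str, y: str) -> bool:
    """Two name fields are compatible if either is empty or one is a
    prefix of the other (e.g. 'T' vs 'Thaddeus')."""
    x, y = x.lower(), y.lower()
    return not x or not y or x.startswith(y) or y.startswith(x)

def likely_same_member(a: Record, b: Record) -> bool:
    """Naive rule: surname and date of birth match exactly, and the
    given and middle names do not contradict each other."""
    if a.last.lower() != b.last.lower() or a.dob != b.dob:
        return False
    return compatible(a.first, b.first) and compatible(a.middle, b.middle)

# Three records that a siloed view would count as three members:
mortgage = Record("John", "", "Allen", "1970-04-02")
card     = Record("John", "T", "Allen", "1970-04-02")
savings  = Record("John", "Thaddeus", "Allen", "1970-04-02")

likely_same_member(mortgage, card)   # → True
likely_same_member(card, savings)    # → True
```

Once records are linked this way, minor discrepancies – such as the one-digit difference in Mr. Allen's phone number – stop distorting the member count.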
At the same time, while it’s important for all employees to have easy access to the data they need to do their jobs, credit unions often spend too much time worrying about building the right type of data repository. They want all their data – generated from deposit accounts, credit cards, loans, mortgages, and more – in one location. They spend too much time debating whether a data warehouse or a data lake would work best and too much money implementing those storage solutions.
Data warehouses and data lakes were designed for institutions processing massive amounts of data. If a credit union processes fewer than 10,000 transactions a day, it may not require a data warehouse or lake. In fact, data storage often proves far less important than data access. As long as credit unions have the easy, ubiquitous data access needed to improve business, and as long as data performance is satisfactory, does it matter whether data is consolidated or housed in various data stores? Enterprise solutions now on the market can assist credit unions in accessing data wherever it lives.
To learn more about how to prep data as part of a sound analytics plan, download our new ebook, Forging a path to data and analytics value: a credit union’s guide.