Why network infrastructure is essential for financial institutions


The number is staggering: $431 million. That was the cost of a 2018 technology outage reported by TSB Banking Group. The issue, caused by a botched data migration, locked account holders out for weeks. The bank bled at least 80,000 customers, and a government probe, propelled by TSB’s repeated payday outages in the months since, is still ongoing.

TSB isn’t alone. Last summer, Visa cardholders in Europe experienced service disruptions following a hardware failure. And earlier this year, Wells Fargo in the U.S. suffered two high-profile outages in the course of a week.

In our line of business, I frequently talk about the cost of downtime. As these firms discovered, it’s not limited to the immediate financial price, which often runs to thousands of dollars per minute, according to Gartner. There’s also the damage to reputation, which is sometimes irrecoverable. Affected customers may never trust or support the business again, and regulatory repercussions are ratcheting up. Should actual data loss occur, the potential legal ramifications for financial institutions are all but incalculable.
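To make the per-minute figure concrete, here is a minimal back-of-the-envelope sketch in Python. The $5,600-per-minute default and the $2,000 customer lifetime value are illustrative assumptions for the sake of the arithmetic, not figures from the incidents above.

```python
# Back-of-the-envelope downtime cost model (illustrative assumptions only).

def downtime_cost(minutes_down: float,
                  cost_per_minute: float = 5_600.0,
                  customers_lost: int = 0,
                  lifetime_value_per_customer: float = 0.0) -> float:
    """Direct outage cost plus the longer-tail cost of customer attrition."""
    direct = minutes_down * cost_per_minute
    attrition = customers_lost * lifetime_value_per_customer
    return direct + attrition


# A single eight-hour outage, direct cost only.
print(f"${downtime_cost(8 * 60):,.0f}")   # $2,688,000

# The same outage if it also drives away 80,000 customers at an assumed
# $2,000 in lifetime value each.
all_in = downtime_cost(8 * 60, customers_lost=80_000,
                       lifetime_value_per_customer=2_000.0)
print(f"${all_in:,.0f}")                  # $162,688,000
```

Even with conservative inputs, the attrition term quickly dwarfs the direct cost of the outage itself.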

Reliability matters in the FinTech battle

The stakes for financial services institutions are higher than ever. Traditional banks and credit unions are fighting a wave of disruption from Silicon Valley companies and their ilk. Cloud-first, mobile-savvy startups are unbundling individual financial functions one at a time. The FinTech market has grown from the early days of PayPal to include Venmo for peer-to-peer payments, Square for mobile payments, Robinhood for stock trading, and many more.

The stalwarts are responding with their own solutions, such as Zelle. But one of the biggest advantages for incumbents is holding the high ground in reliability. Banks’ perceived stodginess can be a plus when customers consider which organization to trust with their hard-earned money. With every headline-making outage denying them access to their accounts or mucking up their transactions, however, this differentiator comes further into question.

On the other side of the equation, the upstarts challenging the financial services leaders on the strength of their technology will have to earn their place with impeccable service. Six-nines uptime, roughly 30 seconds of downtime per year, will be required to compete in the banks’ backyard. Any growth-related hiccoughs could easily stall the momentum of an otherwise promising company.

In this duke-it-out environment, downtime kills. So how does a financial services contender ensure its name is never plastered across the news for going offline? A back-to-basics approach to infrastructure is, ironically, the foundation for a more agile, mobile future.

Information is (or can be) power

You know the saying: “If you can’t measure it, you can’t improve it.” Although not always true, it has real application to technology infrastructure. IT leaders need transparency into every technology asset and the performance of each piece of equipment across the enterprise. The details can easily run to 10 million network and data points worth monitoring and tracking. It’s a lot of information, but put through the right analytics, it becomes an actionable knowledge base on which to judge infrastructure decisions.

The advantages of robust business intelligence dashboards applied to technology infrastructure include:

  • A hard-and-fast number for total cost of ownership (TCO). This provides a present-day dollar figure for the budget and a basis on which to compare potential infrastructure changes and additions.
  • Granular, enterprise-wide performance analytics. How else can you know what’s working and what’s not, where to invest and where to maintain the status quo?
  • An understanding of the true causes of failure. The point of failure is not always the end of the story. Getting to the real root cause enables proactive solutions to prevent future downtime.
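To illustrate how raw asset records might roll up into those three dashboard outputs, here is a minimal sketch in Python. The class, field names, sample records, and dollar figures are all hypothetical; a real deployment would pull them from the organization’s monitoring and asset-management systems of record.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Asset:
    """One piece of infrastructure, as it might appear in an asset register."""
    name: str
    purchase_cost: float          # capital cost
    annual_support_cost: float    # maintenance/support contract
    downtime_minutes_ytd: float   # measured, not estimated
    failure_causes: tuple         # root cause recorded per incident


def total_cost_of_ownership(asset: Asset, years_in_service: int,
                            downtime_cost_per_minute: float) -> float:
    """Present-day TCO: capital + support to date + cost of measured downtime."""
    return (asset.purchase_cost
            + asset.annual_support_cost * years_in_service
            + asset.downtime_minutes_ytd * downtime_cost_per_minute)


# Hypothetical sample records.
fleet = [
    Asset("core-switch-01", 45_000, 6_000, 12, ("power supply",)),
    Asset("san-array-07", 120_000, 18_000, 95,
          ("controller firmware", "controller firmware")),
]

for a in fleet:
    tco = total_cost_of_ownership(a, years_in_service=3,
                                  downtime_cost_per_minute=5_600)
    print(f"{a.name}: TCO ~ ${tco:,.0f}")

# Root-cause rollup: repeated causes point to proactive fixes, not just repairs.
causes = Counter(c for a in fleet for c in a.failure_causes)
print(causes.most_common(3))
```

The point is not the code itself but the discipline: when every asset carries its own cost, downtime, and root-cause history, infrastructure decisions become comparisons of numbers rather than guesses.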

Don’t break the bank (#punalwaysintended)

A common assumption is that throwing money at infrastructure will solve any problem. While it’s obviously not possible to run a financial institution on Commodore 64s and duct tape, the most expensive solution is not always the best one.

Every organization weighs its budget and evaluates IT spend as a percentage of revenue, against growth targets, and/or by ROI on strategic projects. Allocating infrastructure dollars intelligently requires a deep understanding of the capabilities of the current hardware. Which pieces of equipment are up to the task at hand? What gear is failing for a specific application but could be repurposed elsewhere? What products don’t currently meet OEM standards and could stand refurbishment? And what hardware is simply beyond its useful life for the organization but retains value on the resale market?

These are important questions to answer, but the primary sources of information excel in obfuscation. For example, the original equipment manufacturers (OEMs)—the big names like IBM, Dell EMC, NetApp, etc.—work hard to push out positive reports on their newest equipment. What’s more, their sales and channel representatives—and sometimes even the field support engineers—are quick to suggest an upgrade as the solution to whatever the current issue may be.

Getting unbiased input on existing assets and realistic comparisons with equipment that might actually be worth the investment is a constant challenge. Many companies are left to self-administer the appropriate grain-of-salt dose as they wade through the promotional materials and navigate toward cost-efficient infrastructure decisions. Others have found third parties not associated with hardware manufacturing to be important resources. When they also deliver full transparency, such allies can reduce the research burden and improve outcomes related to technology acquisition, retention, and deployment choices.

Put out the fires—and fast

We aren’t going full “Murphy’s Law” with this recommendation. Many potential causes of downtime can be avoided through consistent maintenance and a proactive stance toward asset optimization, as detailed above. Perfection will nonetheless remain out of reach, so it’s essential to integrate high-quality backup, repair, and restore capabilities to address issues as soon as they arise, minimize downtime, and protect against any data loss.

The greatest challenge here tends to be expertise. First, there is the variety of equipment to take into account. Financial institutions are known for avoiding a “rip and replace” approach to their data centers, so there is often a need for engineers with knowledge of everything from old-school mainframes to the latest software-defined networking products.

That alone makes for a complicated HR situation, but it also sends financial services companies headlong into the IT talent gap. There simply aren’t enough knowledgeable IT professionals to go around, so keeping the engineering roster full has become difficult and expensive.

This is part of the reason why alternative maintenance, support, and repair solutions—generally referred to by the shorthand “third-party maintenance”—have become popular. They give organizations access to an existing cadre of expert IT engineers who can provide quick-response diagnostics, troubleshooting, and break/fix service. It’s almost always cheaper than retaining similar talent in house, and the alternative providers avoid the bureaucracy (not to mention the high price) of standard OEM support contracts.

With everything on the line in a rapidly evolving financial sector, it’s vital to find the right partners to maximize infrastructure transparency, provide “no hidden agenda” advice, and respond to infrastructure failures at a moment’s notice. The outages avoided could easily represent millions of dollars in savings, while a more knowledgeable, efficient approach to infrastructure investment can lower total cost of ownership and free up money for strategic projects. It’s a win-win for any financial services institution looking for victory in the market.