
AI: FinTech’s Next High-Performance Workload

Paul Teich

High-frequency trading (HFT), classic quantitative risk assessment, and other Big Data-driven applications have dominated financial technology (FinTech) high-performance computing (HPC) discussions for years. Today's FinTech environment mirrors the research and development side of HPC: FinTech is investing in deep learning (DL), a promising subset of machine learning techniques, and machine learning is the pattern-analysis component that puts the "intelligence" into artificial intelligence (AI).

What happened? Advances and investment in DL techniques are automating data science. Finding data scientists with relevant domain experience is a bottleneck for Big Data analytics: while tools are available, using them requires a master's- or Ph.D.-level understanding of statistics. DL changes that by learning about domains automatically.

New DL techniques underlie the next generation of AI applications, like natural language processing and autonomous vehicles. For FinTech, DL enables sophisticated pattern analysis for HFT and traditional risk analysis.

However, DL also enables financial institutions to incorporate a much richer mix of multi-dimensional data into their analyses, like geospatial and weather data, social media behavior, and other complex real-time data sources. In some applications, DL has surpassed human recognition speed and accuracy.

DL is accurate at multi-dimensional pattern matching, so it is good at detecting anomalies in complex systems.

DL-based risk assessment (insurance fraud detection, trader behavior surveillance, anti-money laundering, etc.) can now incorporate much richer real-world and real-time environmental data. Regulatory compliance can now be treated as a graph computing problem. And DL-based advanced threat detection is becoming a more capable cybersecurity alternative for protecting both data and systems.
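The core idea behind these anomaly-detection applications can be sketched in a few lines. The example below is a deliberately simplified stand-in: it learns a per-dimension statistical baseline from "normal" transactions and flags points that deviate across many dimensions at once. Real DL systems learn far richer representations than means and standard deviations, and all of the data values here are invented for illustration.

```python
import math

# Minimal "model the normal, flag the outlier" sketch. A learned
# baseline (mean, std per dimension) stands in for a trained model;
# the anomaly score is the length of the combined z-score vector.

def fit_baseline(rows):
    """Learn per-dimension mean and standard deviation from normal data."""
    dims = len(rows[0])
    means = [sum(r[d] for r in rows) / len(rows) for d in range(dims)]
    stds = [
        math.sqrt(sum((r[d] - means[d]) ** 2 for r in rows) / len(rows)) or 1.0
        for d in range(dims)
    ]
    return means, stds

def anomaly_score(baseline, point):
    """Combined z-score distance of a point from the learned baseline."""
    means, stds = baseline
    return math.sqrt(sum(((x - m) / s) ** 2
                         for x, m, s in zip(point, means, stds)))

# Hypothetical normal activity: (amount USD, hour of day, merchant distance km)
normal = [[25, 12, 2], [40, 13, 5], [30, 11, 3], [35, 14, 4], [28, 12, 2]]
baseline = fit_baseline(normal)

print(anomaly_score(baseline, [32, 12, 3]) < 4)    # typical point -> True
print(anomaly_score(baseline, [900, 3, 400]) > 4)  # outlier -> True
```

The multi-dimensional part is what matters: a $900 charge, at 3 a.m., 400 km from home is anomalous in combination even if any one of those values might be explainable on its own.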

The Two DL Phases: Training and Inference

DL’s training phase uses representative data to create a DL model that recognizes patterns embedded in the data. The “learning” part is that DL can automatically recognize patterns in its training data. DL learns by being shown examples, much like humans. Unlike humans, DL systems need a lot of examples, sometimes billions.

DL’s inference phase presents new data to a trained DL model to see if the model recognizes patterns it learned from its training data. Using reinforcement learning techniques, a trained DL model can refine itself, learning from and adapting to local conditions.
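The two phases can be made concrete with a toy model. The sketch below trains a single artificial neuron (logistic regression) on labeled examples, then freezes the learned weights and applies them to new inputs — training and inference as separate steps. This is an illustrative stand-in, not a real DL framework; production networks have millions of such units and train on vastly more data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, labels, epochs=2000, lr=0.5):
    """Training phase: fit weights to representative labeled data."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = pred - y                      # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def infer(model, x):
    """Inference phase: apply the frozen, trained model to new data."""
    w, b = model
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy pattern: output 1 only when both inputs are high (an AND-like rule)
data   = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
model = train(data, labels)

print(infer(model, [1, 1]) > 0.5)  # True: learned pattern recognized
print(infer(model, [0, 1]) > 0.5)  # False: pattern absent
```

Note that the expensive, iterative work (the training loop) happens once up front; each inference afterward is a single cheap pass through the model, which is why the two phases place such different demands on hardware.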

Why Does AI Depend on HPC?

There are challenges to fielding and maintaining a DL system: managing the volume of data needed to train DL systems, performing initial training, delivering inference to customers in a timely and accurate manner, periodically retraining models, and securing data. The underlying systems needed to deploy an AI service are: scalable data ingress (both for training and inference), software defined networking to manage data traffic, distributed and massively scalable storage, and dedicated servers designed to accelerate DL training and inference.

Dedicated servers are critical for accelerating AI training and inference. As with any IT purchase, capital and operational expenses (CAPEX and OPEX) play a role in AI services.

The goal for delivering an AI service is to lower the cost per inference while providing accurate, timely inferences. For DL, CAPEX determines how many inferences per second can be delivered per dollar and per server, while OPEX depends on the watts needed to deliver each inference. Training and retraining costs are weighted more toward CAPEX than inference costs are. Accelerating either phase generally means higher server prices and higher power consumption. Whether buying a single server or racks full of them, FinTech IT decision makers should consider deploying dense form factor servers that can flexibly house a range of accelerator options.
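The CAPEX/OPEX split can be put into a back-of-the-envelope calculation: amortize the server price over its service life and add the electricity cost of each inference. Every figure below is an invented assumption for illustration, not vendor data.

```python
# Illustrative cost-per-inference estimate (all inputs are assumptions).
SERVER_PRICE_USD   = 20_000   # assumed accelerated-server price (CAPEX)
LIFETIME_YEARS     = 3        # assumed amortization period
INFERENCES_PER_SEC = 5_000    # assumed sustained throughput
POWER_WATTS        = 600      # assumed server draw under load
USD_PER_KWH        = 0.10     # assumed electricity rate (OPEX)

seconds_in_service = LIFETIME_YEARS * 365 * 24 * 3600
total_inferences = INFERENCES_PER_SEC * seconds_in_service

# CAPEX: server price spread over every inference it will ever serve
capex_per_inference = SERVER_PRICE_USD / total_inferences

# OPEX: energy per inference (joules), converted to kWh, priced
joules_per_inference = POWER_WATTS / INFERENCES_PER_SEC
opex_per_inference = (joules_per_inference / 3.6e6) * USD_PER_KWH

print(f"CAPEX per inference: ${capex_per_inference:.2e}")
print(f"OPEX  per inference: ${opex_per_inference:.2e}")
```

Under these assumptions CAPEX dominates by roughly an order of magnitude, which is why throughput per dollar of hardware, not just watts per inference, drives accelerator purchasing decisions.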

As DL techniques mature, so will acceleration techniques. The net result will be FinTech AI services that can recognize sophisticated, multi-dimensional patterns in huge volumes of data and then isolate anomalies that no human can detect. For example, MasterCard deployed DL-based AI for faster, more accurate fraud detection while financial transactions are still in flight.

Writer William Gibson observed, “The future is already here—it’s just not very evenly distributed.” Leading FinTech shops are already starting to implement AI services. For the rest, AI is the next FinTech HPC workload.

Paul Teich is a principal analyst for TIRIAS Research.



This article was contributed to Digital Engineering by a guest author.