In an email interview, Neeraj Varma, Director – APAC and Japan Sales, Global Data Center, Xilinx, explains how application-specific architectures built with Adaptable Hardware accelerators based on FPGAs are solving latency and other challenges in Quantitative Finance.
In his current role, Varma is responsible for Xilinx’s data center business, encompassing compute, networking and storage. He drives the sales strategy covering direct business with enterprises and OEMs, as well as business through channel partners including VARs/SIs, VADs, and end users.
Neeraj has been with Xilinx for over 15 years, serving in various business leadership roles covering diverse market segments and geographies. Prior to Xilinx, he worked with CoreEl Technologies, Synplicity, Memec Asia Pacific and Messung Systems in business, technical and product management roles.
Neeraj has close to 25 years of industry experience.
Edited excerpts from the interview follow:
DC: What are the challenges faced in Quantitative Finance, and how does big data analytics help?
Financial institutions run extremely complex optimization algorithms, such as Monte Carlo simulation and finite-difference methods, and are now using AI/ML for tasks like sentiment analysis. These workloads need vast amounts of data, most of it unstructured. The other challenge is the number of new regulations being imposed on financial institutions globally. This forces a risk-averse industry to make drastic changes to absorb extra compute cost that doesn’t generate additional income. Banks are forced to increase their compute density and do more with their existing infrastructure, as any expansion isn’t justifiable.
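As a rough illustration of the Monte Carlo workloads mentioned above (a hypothetical plain-Python sketch, not Xilinx code), here is a pricer for a European call option that simulates terminal prices under geometric Brownian motion; accelerators parallelize exactly this kind of independent path simulation:

```python
import random
from math import exp, sqrt

def monte_carlo_call(S0, K, T, r, sigma, n_paths=100_000, seed=42):
    """Price a European call by averaging discounted payoffs over simulated paths."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)  # one standard-normal draw per path
        s_t = S0 * exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)
        payoff_sum += max(s_t - K, 0.0)  # call payoff at expiry
    # Discount the average payoff back to today.
    return exp(-r * T) * payoff_sum / n_paths
```

With a spot and strike of 100, one year to expiry, a 5% rate and 20% volatility, this converges towards the closed-form Black–Scholes value of roughly 10.45; each path is independent, which is why this workload maps well onto massively parallel hardware.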
Open banking, digital banking and blockchain technology are other areas that put even more constraints on the infrastructure and push for new approaches. We address this by providing application-specific architectures with Adaptable Hardware accelerators based on our FPGAs. These can process unstructured data at ultra-high performance while reducing the compute footprint of work traditionally done on CPUs. FPGA-based accelerators like the Xilinx Alveo provide a flexible architecture that can also be used for applications like machine learning, which is increasingly becoming a part of big data analytics.
What is an FPGA?
Field Programmable Gate Arrays (FPGAs) are semiconductor devices that are based around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects. FPGAs can be reprogrammed to desired application or functionality requirements after manufacturing. This feature distinguishes FPGAs from Application Specific Integrated Circuits (ASICs), which are custom manufactured for specific design tasks. Although one-time programmable (OTP) FPGAs are available, the dominant types are SRAM based which can be reprogrammed as the design evolves.
DC: How does Grid Computing help portfolio managers in risk management/mitigation?
Grid computing makes use of large compute infrastructure that may be deployed across geographies and connected with high-speed networks. The word ‘grid’ is analogous to the electrical power grid: the aim is to make compute as easily accessible as electricity. Grid computing leverages compute power from around the world rather than a local cluster. Grids also tend to be heterogeneous, in that they use more than one type of processor core and increasingly use FPGAs as accelerators alongside the main CPU. FPGAs provide compute acceleration with the added benefit of ultra-low latency networking, allowing portfolio managers to price in real time by combining the low-latency network and the compute on the same hardware platform. A real-time view of the bank’s exposure would better protect it against market crashes and also provide the transparency requested by regulators.
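The real-time exposure idea amounts to fanning portfolio revaluations out across workers and aggregating the results. A minimal sketch (illustrative only; on a real grid the workers would be distributed nodes or FPGA accelerators rather than local threads, and the pricer a full model rather than intrinsic value):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy positions: (quantity, spot_price, strike) for simple call options.
portfolio = [
    (10, 105.0, 100.0),
    (-5, 98.0, 95.0),
    (20, 50.0, 55.0),
]

def intrinsic_value(position):
    """Revalue one position; a real grid job would run a full pricing model."""
    qty, spot, strike = position
    return qty * max(spot - strike, 0.0)

# Fan the revaluations out across workers, then aggregate into total exposure.
with ThreadPoolExecutor(max_workers=4) as pool:
    values = list(pool.map(intrinsic_value, portfolio))

exposure = sum(values)  # 10*5.0 + (-5)*3.0 + 0.0 = 35.0
```

Because each position is revalued independently, the same pattern scales from a thread pool to a geographically distributed grid.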
DC: How do ultra-low latency and low latency electronics help in algorithmic trading?
Electronic trading is, by definition, a set of algorithms running on computers automatically taking trading decisions. Since trading is always on a first-come, first-served basis, there is every incentive to win a trade by reaching the exchange first. In the age of High-Frequency Trading (HFT), a few nanoseconds can make the difference between gaining and losing money. So these HFT systems depend on ultra-low latency networking and computing to ensure the quickest path to and from the exchange, and very fast trading algorithm execution.
Xilinx provides low-latency, deterministic networking solutions with our Solarflare-branded Network Interface Cards (NICs) and Alveo FPGA accelerators, to execute trades with one of the lowest tick-to-trade latencies in the industry. In addition, our Algorithmic Trading Reference Design enables customers to quickly build their own trading solutions on FPGAs without having to learn hardware description languages; they can use traditional software languages such as C/C++ instead.
Xilinx not only provides all the required hardware for networking and computing for low and ultra-low latency trading, but also provides solutions for the trading platform itself.
DC: We see increased use of AI/ML in finance for intelligent and faster decision-making. How is this being enabled by technology?
AI/ML is one of the fastest-evolving areas in technology today, with new models and methodologies coming out several times a year. To date, most models have run on CPUs, with training on GPUs, but little investment has been made to optimise the inference and application of AI. This is due to challenges with power and latency on GPUs, while ASICs go out of date very quickly as the models constantly evolve. FPGAs have started making waves in inference applications due to their flexibility to change between models, their significant latency advantages over CPUs, and their integration with leading trading-system platforms.
Xilinx recently invested in its ML Suite and Alveo platforms to accelerate real-time trading applications. Customers have now started moving from traditional mathematical models such as Black–Scholes to FPGA-accelerated ML/AI for options pricing. It is still early days for AI in finance, but adoption is gaining traction for tasks like sentiment analysis on social media. The winners will be those using the right methods on the right hardware.
Black–Scholes is a pricing model used to determine the fair price, or theoretical value, of a call or put option based on six variables: volatility, type of option, underlying stock price, time to expiration, strike price, and risk-free interest rate.
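For reference, the closed-form Black–Scholes price of a European call fits in a few lines. A textbook sketch (assuming no dividends; illustrative, not Xilinx code):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Fair value of a European call: spot S, strike K, expiry T in years,
    risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    # Call price = discounted expected payoff under the risk-neutral measure.
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For an at-the-money call with S = K = 100, T = 1 year, r = 5% and sigma = 20%, this gives approximately 10.45, the same value a Monte Carlo simulation converges to.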
DC: What role will data centers play in the future of banking? Can you also comment about processing at the edge and micro data centers?
Data centres have always played a huge role in banking. In fact, some of the largest data centres built in the past ten years are owned by major banking groups such as Bank of America and JP Morgan, and rival those of Microsoft and Amazon in size. The large banks in India are no exception and are heavily invested in state-of-the-art data centres. There has been a significant move towards the cloud as many players struggle to keep up with the rapid pace of technological change. However, both cloud and on-premise data centres still need the right technology, matched to the type of problem, to ensure costs are minimised and performance maximised. For example, a constant flow of data may make better commercial sense on-premise, while more variable workloads could benefit from the cloud. Data centres will continue to play a key role in banking, but reducing their cost and minimising their ESG impact is an increasing concern. Utilising and optimising the hardware stack to enable power-efficient, cost-effective solutions is very important, and Xilinx Alveo accelerators and FPGAs are reducing TCO and power while improving performance.
DC: We heard that AMD is acquiring Xilinx. Is that part of AMD’s data center strategy? How do you see the acquisition panning out?
The acquisition brings together two industry leaders with complementary product portfolios and customers. AMD will offer the industry’s strongest portfolio of high-performance processor technologies, combining CPUs, GPUs, FPGAs, Adaptive SoCs and deep software expertise — to enable leadership computing platforms for cloud, edge and end devices. Together, the combined company will capitalize on opportunities spanning some of the industry’s most important growth segments from the data center to gaming, PCs, communications, automotive, industrial, aerospace and defence.