Thought Leadership
Bahl & Gaynor Insights
January 2025
As the market continues to digest the implications of DeepSeek, and several of the “Magnificent 7” cohort of companies have reported earnings, we provide updated thoughts to our initial post:
- Transformer-based artificial intelligence (AI) models may face fundamental limits to improvement through training alone. Other methods, such as distillation (training a smaller model to reproduce the outputs of a larger one), will likely be a source of future efficiency gains and improvement.
- Reinforcement Learning, an innovative reasoning method reflected in DeepSeek’s model, will likely increase computing demand for inference (output to users). This further validates Jevons Paradox, in which increased efficiency and lower cost drive future demand (for LLMs in this case), and is evident in the capex commitments reaffirmed across recent “Magnificent 7” cohort earnings reports.
- DeepSeek’s model training costs are likely understated as they exclude significant capex items. Further, the company’s demonstrated ability to deliver inference queries to a dramatically expanded user base likely points to the existence of a far larger (and costlier) compute cluster than stated publicly.
BAHL & GAYNOR VIEW
- Though transformer-based AI may be nearing the limits of improvement through training, that constraint points to further innovation through inference advancements – increasing inference compute demand and hardware requirements.
- Innovations in inference may broaden the channels for its delivery. A major focus of AI is evolving toward delivery through an “agent” – software that acts on a user’s behalf to assist with tasks and improve productivity. This concept reinforces the relevance of edge (near-to-the-user) cloud and compute – a major focus of Bahl & Gaynor’s technology investment exposure across our strategies.
- Once an AI model has been trained, it can be run on an expanded range of hardware, particularly custom silicon. With a shift from efficiency gains through training to inference innovation, custom silicon will likely become a more critical component of model operator technology investment – another major focus of Bahl & Gaynor’s technology investment exposure across our strategies.
BOTTOM LINE
AI infrastructure hardware and software demand will likely continue to evolve based on where future efficiency gains for AI models are found (efficiency from training potentially ceding ground to inference advancement). The benefits of this evolution extend to a wider variety of technology providers than the highly concentrated set reflected in market performance over the last few years. Bahl & Gaynor is excited about this evolution and well-positioned for the broader benefits it will bring to companies employing these technologies and to society at large.
Published on 1/31/2025.
The information provided herein is for informational purposes only and does not constitute an offer or solicitation to buy or sell any securities. The views expressed reflect the opinions of Bahl & Gaynor as of the date of this communication and are subject to change. Bahl & Gaynor assumes no liability for the interpretation or use of this report.
Magnificent 7 refers to a group of seven influential U.S. technology companies: Alphabet (the parent company of Google), Amazon, Apple, Meta Platforms (formerly Facebook), Microsoft, NVIDIA, and Tesla.

Capital Expenditures (CapEx) refer to funds that an organization invests in acquiring, upgrading, or maintaining long-term assets, such as property, equipment, technology, or infrastructure.

Transformer-Based Artificial Intelligence (AI) refers to machine learning models that use the Transformer architecture to process and analyze complex data like text, images, and speech. These models rely on an attention mechanism to establish relationships between data elements, providing more accurate and context-aware outputs. This technology is widely used in language processing, image recognition, and predictive analytics. Notable examples include GPT (Generative Pre-trained Transformer) for text generation and BERT (Bidirectional Encoder Representations from Transformers) for text understanding. Transformer-based AI enhances industries by improving automation, content generation, and decision-making processes.

Reinforcement Learning (RL) is a machine learning approach where an agent learns by interacting with its environment. The agent receives rewards for positive outcomes and penalties for negative ones, refining its actions over time to achieve specific objectives. RL is used in robotics, autonomous systems, game AI, and other decision-making processes where traditional programming is less effective.

Inference Queries involve asking a system (like an AI model or database) to draw conclusions from available data or a pre-existing knowledge base. In machine learning, this means applying a trained model to new data to generate predictions, classifications, or decisions. Inference is critical for decision-making in AI systems, such as classifying text sentiment or deducing relationships based on known information.

Inference Advancements refer to improvements in the efficiency, accuracy, and scalability of inference processes in AI and machine learning. These developments enable trained models to quickly and effectively make predictions, classifications, or decisions based on new data. They help make AI applications more practical and deployable, from recommendation systems to autonomous vehicles.

Custom Silicon refers to semiconductor chips designed for specific applications, rather than general-purpose designs. These chips optimize performance, power efficiency, and cost for tasks like AI algorithms, consumer electronics, or advanced computational processes in data centers.