The AI Player That Everyone's Trying To Catch
- Henriot Investment Management LLC

- Oct 21, 2025
- 17 min read
Updated: Oct 24, 2025

Summary
Unmatched AI Scale: Dominating compute and networking at a level no competitor comes close to matching.
Record Growth Momentum: Revenue and margins expanding simultaneously, signaling durable structural demand.
Next-Gen Catalyst Ahead: The upcoming architecture cycle could reset industry performance benchmarks once again.
Priced for Excellence, Still Undervalued: Growth-adjusted valuation remains attractive given leadership position and earnings trajectory.
Investment Thesis
We reaffirm our Buy rating on Nvidia (NASDAQ: NVDA). The company’s dominance in AI infrastructure is translating into explosive growth, and with the current Blackwell GPU ramp in full swing and the next-gen Rubin platform on the horizon, Nvidia’s AI leadership and financial momentum show no signs of slowing.
Nvidia is at the epicenter of what its CEO calls a "new industrial revolution"
driven by AI. Our bullish thesis is simple: Nvidia is the picks-and-shovels leader of the AI age, selling the high-end GPUs and full-stack systems that every major AI project needs. In an AI “arms race” where demand for training large models and deploying AI services is surging globally, Nvidia’s platform has become the default choice. The company’s competitive moat – spanning cutting-edge silicon, software (CUDA, AI frameworks), and an ecosystem adopted by all cloud providers – positions it to capture an outsized share of the trillions in AI infrastructure spending this decade.
We see Nvidia as a unique asset: a hyper-growth company (revenue +56% YoY) at mega-cap scale, with insane profitability (50%+ net margins) and a long runway still ahead. Even after the stock’s huge run, the valuation remains anchored by explosive earnings growth. In short, Nvidia is executing on a level rarely seen – it’s not only riding the AI wave, it built the surfboard. We expect continued upside as the AI adoption curve is still in early innings (chatbots, generative AI, and autonomous machines are just beginning to proliferate). Our thesis is that Nvidia’s dominance in AI compute will translate into sustained revenue growth, high returns on capital, and further shareholder returns (buybacks/dividends), supporting a higher stock price. We remain confidently long.
Nvidia's Business Overview
Nvidia has effectively transformed from a PC graphics company into an AI infrastructure company. The business now spans several segments, but Data Center is by far the star:
Data Center (AI & Cloud): $41.1 billion revenue in Q2 alone (88% of total), up 56% YoY. This includes Nvidia’s AI accelerator GPUs (like H100 and new Blackwell chips), networking hardware (InfiniBand/NVLink switches), and enterprise software. All major cloud providers (AWS, Google, Microsoft, etc.), countless startups, and enterprises are racing to build AI capabilities – and almost all rely on Nvidia’s platform. The company’s GPUs are essentially the “brains” behind large-scale training of AI models and increasingly, the deployment of those models for inference. Nvidia’s data center business has gone from ~$3–4B/quarter two years ago to over $40B/quarter now – a mind-bending rise that reflects how indispensable its products have become in the AI era. Notably, Nvidia is working with governments as well, helping build national AI supercomputers (Europe’s first industrial AI cloud, etc.) to support sovereign AI initiatives. This broad adoption across geographies cements Nvidia’s leadership in AI infrastructure worldwide.
Gaming: $4.3 billion last quarter (+49% YoY). Ironically, what used to be Nvidia’s core business (GeForce GPU chips for PC gaming) now looks small next to AI. Gaming is recovering nicely after the post-pandemic lull – the new Blackwell RTX 50-series graphics cards are selling well (the RTX 5060 became the fastest-ramping x60-class GPU). But even record gaming revenue is barely one-tenth of data center sales. AI is the dog wagging Nvidia’s tail now, not vice versa. We still appreciate the gaming segment’s steady cash generation and its role in fostering Nvidia’s GPU technology, but investor attention is rightly on bigger fish.
Professional Visualization: $601 million (+32% YoY). This covers workstation GPUs for professionals, visualization, and Nvidia’s Omniverse platform (for 3D simulation and collaboration). It’s a smaller piece, but grew nicely as businesses invest in AI-enhanced design, simulation, and content creation tools. Nvidia is also rolling out new Blackwell-based RTX Pro GPUs for workstations. Importantly, management highlights Omniverse and simulation as key to “physical AI” development (e.g., digital twins for robots) – tying this segment into the broader AI story.
Automotive & Robotics: $586 million (+69% YoY). Nvidia’s automotive AI business (Drive platforms for autonomous vehicles) and robotics/embedded AI (Jetson) are nascent but growing fast. In Q2, Nvidia began shipping its latest Drive Thor automotive AI chips and Jetson AGX Thor modules for robots. These are essentially AI brains for self-driving cars, drones, warehouse robots, etc. Nvidia calls this merging of AI and the physical world “physical AI,” and it could be a massive long-term opportunity as industries automate. One Jetson Thor can power a warehouse robot; millions might be needed across factories, cities, and vehicles. While auto/robotics revenues are still a rounding error relative to data center, Nvidia is planting seeds in a market that could flourish as agentic AI moves from the digital realm into the real world.
Financially, Nvidia is a juggernaut right now. Operating income in Q2 was $28.4B (61% operating margin), and free cash flow is pouring in. The company returned $24.3B to shareholders in the first half via buybacks and dividends, and the Board just topped up the repurchase authorization by another $60B. In plain English: Nvidia is making so much cash, they’re running out of places to put it, except back into their own stock. This aggressive buyback signals confidence and should provide support to the share price. It’s remarkable that even after heavy R&D investment ($4.3B last quarter) and capital spending, Nvidia is still swimming in profits – a testament to the economics of being the top dog in an insatiable market.
Growth and Profitability
Nvidia’s growth has reached rarefied heights. The latest quarter saw +56% YoY revenue and +41% QoQ net income – on top of already record-breaking results in the prior year. Such growth at Nvidia’s scale (annual revenue run-rate now ~$180B) is almost unheard of in corporate history. Crucially, this isn’t growth at the expense of margins; it’s highly profitable growth. GAAP gross margin hit 72.4% last quarter (non-GAAP 72.7%), up dramatically from ~60% just a quarter prior. For context, these are software-like margins. Each incremental dollar of AI chip sales is contributing enormously to the bottom line. GAAP EPS was $1.08 (up ~61% YoY), and Nvidia guided an even higher ~$1.20–1.30 for Q3. In fact, Q3 revenue is forecast at $54B (another ~15% sequential jump), suggesting the growth is accelerating into the second half of FY2026.
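The back-of-the-envelope arithmetic behind these figures is easy to check. The sketch below uses only numbers cited in this article; the Q2 total revenue is implied (not directly quoted) from the $41.1B data center figure at 88% of total:

```python
# Sanity check on the growth figures cited above.
# All inputs come from the article; the Q2 total is implied, not quoted.

q2_revenue_b = 41.1 / 0.88   # ~$46.7B total, implied by data center = 88% of revenue
q3_guide_b = 54.0            # Q3 revenue guidance cited in the article

sequential_growth = q3_guide_b / q2_revenue_b - 1
annual_run_rate_b = q2_revenue_b * 4

print(f"Q2 -> Q3 sequential growth: {sequential_growth:.1%}")
print(f"Annualized Q2 run-rate: ${annual_run_rate_b:.0f}B")  # roughly consistent with "~$180B"
```

Running this confirms the ~15% sequential jump and a run-rate in the mid-$180Bs, in line with the "~$180B" figure above.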
What’s driving this exceptional performance? Demand far exceeds supply in Nvidia’s core data center products. Cloud giants and enterprises are racing to install as many H100/Blackwell GPU servers as they can get their hands on, as AI workloads scale up. Nvidia’s challenge has been production, not finding buyers. The company is working closely with TSMC and memory suppliers (HBM3/HBM4) to boost output, but lead times remain long for the hottest AI systems. This supply-constrained, high-demand environment is a recipe for strong pricing power – contributing to those fat margins. Even Nvidia’s older products (e.g. A100 chips) continue to sell out, often to secondary markets or overseas customers facing export restrictions.
It’s also worth noting Nvidia’s operational leverage. While revenue exploded, operating expenses rose at a much slower clip (+38% YoY). The business benefits from past R&D (architectures like Hopper/Blackwell) and a unified software stack; once the heavy lifting is done, selling an extra thousand GPUs is mostly manufacturing cost. Thus, each surge in sales drops a disproportionate amount to profit. Nvidia is effectively printing profits at this stage – and returning a chunk to investors via buybacks, while still hoarding cash for strategic uses (like huge prepayments to TSMC to secure future capacity, acquisitions, etc).
The AI Catalysts
The AI revolution is Nvidia’s playground, and several catalysts will drive its next phase of growth. We break down the key opportunities into two mini-sections: one for the immediate catalyst (the Blackwell GPU rollout fueling today’s results) and one for the coming catalyst (the Rubin platform setting up tomorrow’s gains).
The Blackwell Rollout
Nvidia’s current growth spurt is powered by the transition to its Blackwell architecture GPUs in data centers. Blackwell (the successor to 2022’s “Hopper” H100) is delivering a huge performance leap that has customers clamoring. “Blackwell is the AI platform the world has been waiting for,” Jensen Huang proclaimed on the earnings call, noting it delivers an exceptional generational jump in capability. In practical terms, Blackwell GPUs and systems offer dramatically higher throughput for training and inference, better energy efficiency, and new interconnect tech (NVLink switches for “rack-scale” AI). Demand is extraordinary – Nvidia can’t make them fast enough. In Q2, Blackwell-based data center revenue jumped 17% sequentially and that’s with supply still ramping.
A big part of Blackwell’s appeal is how it enables the next level of AI models. These chips are designed not just for number-crunching, but for what Huang calls “reasoning AI models” – AI that can think in multi-step processes and work with massive context. Think of complex chatbots that carry on lengthy conversations, or AI agents that plan tasks autonomously (what Nvidia dubs agentic AI). Such applications require far more memory and computational persistence, which Blackwell delivers via enormous HBM memory pools and NVLink connectivity that stitches many GPUs into a quasi-supercomputer. Nvidia even proved Blackwell’s prowess by setting records in the MLPerf AI benchmarks, taking the crown in every category of AI model training. In short, Blackwell has no serious competition at the high end right now – it’s the go-to solution for anyone pushing the frontier of AI.
This brings us to a related catalyst: inference. Much of the narrative has been about training giant models, but once trained, those models must be deployed to serve users (answer queries, generate content, etc.). Inference at scale is poised to dwarf training in compute needs, and Nvidia is addressing that head-on. The company introduced a new 4-bit precision format, NVFP4, specifically to speed up AI inference with minimal quality loss. Blackwell GPUs support this 4-bit mode, enabling them to serve things like GPT-style models far more efficiently. This not only expands Nvidia’s market (lots of smaller inference servers will be needed outside of core training clusters) but also protects its margins – by pushing customers to upgrade to the latest chips for better inference throughput. Management highlighted that inference workloads are the next frontier, and Nvidia is rolling out a strategy of “disaggregated inference” – essentially pairing different types of chips to handle different parts of an AI service (one chip builds context, another generates output). Blackwell GPUs, along with certain specialized accelerators, will play central roles in this.
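Nvidia has not published full NVFP4 details in a form we can reproduce here, but the economics of 4-bit inference are easy to illustrate with a generic block-scaled 4-bit quantizer (this is NOT the actual NVFP4 format – just a sketch of why 4-bit weights cut memory and bandwidth by ~4x versus FP16 with modest accuracy loss):

```python
import numpy as np

def quantize_4bit(weights, block_size=32):
    """Generic block-scaled symmetric 4-bit quantization.
    Illustrative only -- NOT Nvidia's actual NVFP4 format."""
    w = weights.reshape(-1, block_size)
    # One scale per block, mapping the block's max magnitude to the int4 range [-7, 7]
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero on all-zero blocks
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale, shape):
    return (q * scale).reshape(shape)

# 4-bit storage is 1/4 the memory of FP16, so the same HBM pool serves ~4x the weights
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s, w.shape)
print("max abs reconstruction error:", np.abs(w - w_hat).max())  # small vs. weight scale
```

The per-block scaling is what keeps the error tolerable: each 32-weight block gets its own scale, so outliers in one block don't destroy precision everywhere else. Hardware-native 4-bit formats like NVFP4 apply the same idea with dedicated tensor-core support.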
On top of all this, Nvidia continues to widen its moat through full-stack offerings. For example, it launched Spectrum-X networking gear this year to connect sprawling AI data centers, and its software frameworks (like CUDA, cuDNN, TensorRT) remain standard in the industry. All major AI frameworks and models are optimized for Nvidia. This ecosystem lock-in is a huge catalyst for sustained growth: even if a rival chip appears, customers would face significant friction leaving Nvidia’s platform. As Jensen put it, “all global frameworks support NVIDIA... launching on the NVIDIA platform is undoubtedly the most reasonable decision.” We agree – the company’s early investments in software are paying off mightily now.
Looking nearer term, continued Blackwell adoption through 2025 and into 2026 will keep Nvidia’s earnings on fire. The product pipeline here includes the ongoing ramp of Blackwell Ultra (a beefed-up version for the absolute largest models) and derivatives for specific markets (for instance, a potential Blackwell “B” series for China if export rules require lower specs). Every indicator suggests that Blackwell = big money for at least the next year-plus.
The Rubin Setup
Nvidia’s next-generation Rubin platform, including the Rubin CPX module, will pair a new “Rubin” GPU with Nvidia’s first custom CPU, aiming to leapfrog even Blackwell in AI performance.
If Blackwell is today’s story, Rubin is tomorrow’s. Nvidia has already teased its next major architecture, named after Vera Rubin, slated to debut in 2026. This platform is hugely significant: it’s the first time Nvidia will offer a combined CPU+GPU “superchip” for AI. The Rubin GPU (built on TSMC 3 nm) will be accompanied by “Vera,” an Nvidia-designed CPU, tightly integrated to slurp data from memory and feed the GPU. Why does this matter? Because future AI workloads will demand even more synergy between general-purpose computing and accelerated computing. By designing its own CPU to work in lockstep with the GPU, Nvidia can optimize the entire pipeline (from data loading to inferencing) in ways off-the-shelf CPUs can’t. It essentially lets Nvidia control the whole compute stack for AI, which is bad news for CPU vendors and great news for Nvidia’s ability to deliver performance (and capture more wallet share per system).
On paper, Rubin’s specs are eye-popping. The GPUs will deliver ~50 PFLOPs of 4-bit AI compute each – more than double a Blackwell – and come with next-gen HBM4 memory for enormous context windows. In fact, Nvidia has said Rubin is “designed to meet the demands of long-context AI workloads”, including things like code generation, large-scale reasoning, and multimodal agentic AI. In other words, Rubin is built for AI agents and massive models that need to hold entire books or hours of video in memory while reasoning. This could unlock new applications (imagine AI assistants that can digest entire corporate databases or realistic AI characters in games that remember everything). Technologically, it’s about feeding the beast – providing GPUs with faster memory access, larger caches, and tight CPU coupling so they can chew through unprecedented amounts of data. The upcoming Rubin CPX variant, for example, is optimized for the “context” phase of inference, packing 128GB of cutting-edge GDDR7 memory to handle initial reasoning before handing off to the main Rubin GPU with HBM4 for generation. This divide-and-conquer approach is Nvidia’s answer to the inference challenge, and it expects major efficiency gains from it.
Strategically, Rubin extends Nvidia’s roadmap of “planned disruptions.” The company has mapped out the next few years with rare clarity: Blackwell now, Rubin in 2026, and a Rubin Ultra in 2027 (essentially two Rubin GPUs fused for ~100 PFLOPs). For investors, this means we can foresee continued product-driven growth catalysts. Each new generation so far (Pascal → Volta → Ampere → Hopper → Blackwell) has brought not just performance jumps but new revenue peaks for Nvidia. There’s little reason to think Rubin will be different – if anything, it could be bigger, as AI usage by 2026 will be broader and more entrenched across industries. Nvidia has already confirmed Rubin development is on track (chips are taped out and in fab), quashing rumors of delays. Early customer interest is reportedly high, especially from those looking to build advanced “AI factories” (think national labs, cloud titans, and defense agencies that need sovereign capability).
One particularly exciting angle is expanding TAM into areas like sovereign AI and edge AI. Rubin’s design (with integrated CPU and massive memory) could enable more turnkey AI supercomputers that governments or companies deploy internally for sensitive workloads. Nvidia mentioned working with multiple nations on sovereign LLM models using its technology. With Rubin, an entire country’s AI infrastructure might be purchasable as a few racks of “Vera Rubin” superpods – a lucrative proposition. Meanwhile on the physical AI side, as agentic AI moves into robots and vehicles, a scaled-down Rubin-derived system-on-chip could empower incredibly smart autonomous machines. Nvidia alluded to AI agents requiring large knowledge-base caches and ultra-fast memory – precisely what Rubin’s architecture is meant to provide. This positions Nvidia to capture value not just in cloud data centers, but in every factory, hospital, and city that stands up its own AI capabilities.
In summary, Rubin is the next big opportunity. It keeps Nvidia two steps ahead of competitors, pushes AI computing to new heights, and widens the use cases (and buyers) for Nvidia’s solutions. It’s the textbook definition of a secular catalyst: one that could drive a whole new upgrade cycle across the tech landscape. We expect more details on Rubin at Nvidia’s 2025 GTC conference, and we suspect early shipments to lead customers could hit late 2025 (for testing) before broad availability in 2026. Investors should watch for any early performance claims – if Rubin is even close to as impactful as promised, Nvidia’s dominance (and pricing power) will carry well into the back half of the decade.
Valuation
With Nvidia’s stock up dramatically in the past couple of years, valuation is a natural question. At ~50× trailing earnings, there’s no denying NVDA carries a premium valuation. However, we’d argue the premium is firmly justified – and relative to its growth, Nvidia might even be undervalued.
First, consider growth-adjusted metrics. Nvidia’s PEG ratio (P/E to growth) is around 0.6x, implying the earnings multiple is less than the earnings growth rate. This is a rare find among large-cap tech. Rival AMD, for instance, has a PEG above 2 (much slower growth for its multiple), and the semiconductor sector median PEG is roughly 1.7. In other words, Nvidia’s stock, expensive as it looks, actually offers more bang-for-buck in growth terms than most peers. That’s a function of how extreme its recent growth has been – and how optimistic the forward estimates have become. Wall Street now expects Nvidia’s EPS to compound extremely fast in the next few years (consensus has FY2026 EPS more than triple FY2024’s). When growth is this high, even a 50× multiple can shrink very quickly. We note that Nvidia’s forward P/E is already down to ~30×, as earnings forecasts have caught up to the price. A year ago, forward P/E was over 60×. The company’s results are effectively “growing into” what initially seemed like an outrageous valuation.
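The "growing into the multiple" argument above can be made concrete with a toy calculation. The PEG and trailing P/E figures come from the article; the three-year growth path below is a pure what-if we chose for illustration, not a forecast:

```python
# How fast earnings growth compresses a "premium" multiple.
# Trailing P/E (~50x) and PEG (~0.6) mirror the article; the growth
# path is hypothetical, chosen only to illustrate the mechanism.

price = 100.0                  # arbitrary; only the ratios matter
trailing_eps = price / 50.0    # implies the ~50x trailing P/E cited above

peg = 50.0 / 85.0              # P/E divided by growth rate in %; ~85% growth gives PEG ~0.6
print(f"PEG at 85% EPS growth: {peg:.2f}")

eps = trailing_eps
for year, growth in enumerate([0.60, 0.40, 0.30], start=1):  # hypothetical decelerating growth
    eps *= 1 + growth
    print(f"Year {year}: P/E at today's price = {price / eps:.1f}x")
```

Even with growth decelerating each year, the effective multiple on today's price falls below 20x by year three – which is the sense in which a 50x headline multiple "shrinks very quickly" when earnings compound this fast.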
Second, compare peer valuations in context. A trailing P/E of ~50 for Nvidia actually doesn’t look bad when you see AMD at ~137× (trailing) and no trailing earnings at all for Intel (which has been losing money). Even a quality semi like Broadcom is at ~86× TTM due to some one-off accounting quirks (normalized ~20×, but growth is single-digit). EV/EBITDA tells a similar story: NVDA ~44×, AMD ~70×, Broadcom ~52×. The sector median is elevated in the 30–40× range. So Nvidia, while at the high end, is not a crazy outlier relative to peers. And importantly, Nvidia’s quality is higher – its gross margins and returns on capital dwarf those of others. For instance, AMD is only ~50% gross margin and has struggled with profitability in down cycles; Nvidia’s 72% gross margin and asset-light model (outsourced manufacturing) yields much more resilient earnings. We’d argue Nvidia deserves a higher multiple than the average chip maker given its quasi-monopoly and superior financial profile.
Third, there’s the intangible “AI leader” premium. Nvidia has become one of those narrative-driven stocks that commands a scarcity value: if you want pure-play exposure to the AI compute boom, there’s essentially Nvidia and no one else at scale. That has brought in a lot of investors (and yes, some speculative froth at times), but it also means Nvidia can probably sustain a higher valuation as long as the AI secular story is intact. Companies viewed as platform leaders (think Apple in smartphones, Tesla in EVs a few years back) often trade at elevated multiples for extended periods. As long as Nvidia continues executing and growing, we’re comfortable that the market will continue to reward it with a premium valuation. In fact, if growth surprises further to the upside (not hard to imagine given the conservative guidance history), today’s effective multiples could compress even faster as earnings catch up – leaving more room for stock appreciation.
All told, we believe Nvidia’s current valuation is supported by fundamentals. Could the stock see volatility? Absolutely – at these heights, any hiccup in growth or margins could cause a sharp pullback. But on a 2-3 year view, if Nvidia delivers on the expected earnings trajectory (and we think they might exceed it), the stock still has upside. We would be more concerned if we saw evidence of growth saturation or serious competitive pressure, which could cause the multiple to deflate without earnings filling the gap. So far, we see no such red flags.
Key Risks
No investment is without risk, and Nvidia – despite its strengths – has a few worth noting:
Competition and Technological Change: Competition is circling, drawn by Nvidia’s rich profit pool in AI. AMD has launched its MI300 accelerators to compete in high-end AI, and claims some wins (notably, MI300 is used in the El Capitan supercomputer). Startups and big tech (e.g. Google’s TPUs, Amazon’s AWS Trainium/Inferentia chips) are also developing AI-specific ASICs. There’s a risk that over the next 2-3 years, competitors narrow the gap or offer niche solutions that cut into segments of Nvidia’s market. That said, Jensen Huang doesn’t sound too worried – he points out that many custom AI chips never make it to mass production due to high complexity, and Nvidia’s full-stack approach (hardware + software + ecosystems) gives it a huge edge. We concur: switching off Nvidia isn’t just about matching a chip spec, it requires an entire platform build-out, which most competitors can’t do easily. But it’s a space to watch. If, for example, a major cloud decides to significantly scale back Nvidia purchases in favor of in-house silicon, it could slow Nvidia’s growth. So far, most have added custom chips alongside Nvidia GPUs, not outright replaced them.
Export Restrictions and Geopolitics: A significant portion of Nvidia’s demand comes from China, which is facing U.S. export restrictions on advanced AI chips. Nvidia has navigated this by offering slightly neutered versions (A800/H800 instead of A100/H100), but the U.S. could tighten rules further (as it did with the new “H20” chip which now requires a license). If geopolitical tensions worsen, Nvidia could effectively be barred from selling high-end chips to China – cutting off a major market (some estimates put 20-25% of data center revenue at risk). Additionally, China is pouring money into domestic AI chip efforts to reduce dependence on Nvidia; companies like Biren and Huawei have announced GPU-like processors. While currently a generation behind, they could improve over time. On the flip side, western governments are increasingly investing in local AI infrastructure (for defense, etc.), which boosts Nvidia, and the company is diversifying demand globally (U.S., Europe, Middle East). Still, geopolitics remains a wildcard that could impact Nvidia’s sales trajectory or supply chain (TSMC in Taiwan being another geopolitical flashpoint).
Supply Constraints: Ironically, one of Nvidia’s biggest “risks” in the near term is simply not being able to fulfill all the demand. The company is capacity constrained on cutting-edge nodes and packaging – there’s only so many 3nm wafers TSMC can produce, and only so many HBM memory stacks to go around. Nvidia has likely made massive prepayments and long-term orders, but any hiccup at suppliers (e.g., delays in fab readiness, yield issues, materials shortages) could delay product deliveries. This risk is somewhat mitigated by Nvidia’s close relationships and its ability to throw cash at the problem (it can pay suppliers to prioritize its orders), but it exists. If customers experience long delays, they might explore alternatives out of necessity. Additionally, high demand has led to extended lead times which could flatten the growth curve until supply catches up.
Valuation & Sentiment: While we argued the valuation is reasonable relative to growth, the stock price undoubtedly bakes in high expectations. Any sign of growth slowdown – say data center sales “only” grow 20% one quarter, or gross margin ticks down – could spur a sharp correction in the stock as momentum investors flee. In 2022, Nvidia’s stock fell hard when crypto-related demand vanished and gaming GPU sales slumped; it’s not immune to cycles. If the AI spending cycle pauses (for example, enterprises digest the hardware they bought before ordering more), Nvidia’s numbers could surprise to the downside. Moreover, as macro conditions tighten (higher interest rates, etc.), high-multiple stocks can see outsized volatility. We think Nvidia’s earnings momentum will outweigh these factors, but sentiment can swing violently in the short run. Investors should be prepared for potential 20-30% drawdowns on any whiff of bad news – it’s the price of owning a high-flyer.
Execution Risk: Finally, Nvidia must continue to execute well on product development. As it ventures into new areas like CPUs (the Vera CPU) and ever-more complex systems, there’s execution risk. If Rubin were to hit delays or underdeliver on performance, it could give competitors an opening. Nvidia’s track record is excellent, but as products become more intricate (6 different chips in a Rubin supercomputer, as Huang noted), the challenge grows. We’ll be watching how Nvidia manages its sprawling roadmap.
Despite these risks, we believe Nvidia’s overall risk/reward remains attractive. The company has navigated past disruptions (e.g., crypto boom/bust in GPUs) and emerged stronger. Its balance sheet is fortress-like (tens of billions of dollars in cash net of debt), so it can withstand shocks or invest heavily if needed. In our view, none of the identified risks fundamentally undermine the long-term thesis – they are factors to monitor, not reasons to abandon ship.
Final Thoughts
Nvidia has firmly established itself as the AI chip champion, and it continues to fire on all cylinders. The latest earnings confirm that the company is benefiting immensely from the voracious demand for AI computation, and management’s commentary reinforces that we are still in the early chapters of this story. With the Blackwell rollout driving record results now and the Rubin platform preparing to drive the next wave, Nvidia’s growth pipeline looks as strong as ever.
Yes, the stock has had a massive run, and yes, one must always be cautious chasing a high-flyer. But in Nvidia’s case, we see a company delivering the kind of fundamental performance that justifies the hype. It’s not a meme stock or a speculative concept – it’s a highly profitable, cash-generating market leader with clear visibility into its future product roadmap and ample avenues to expand (into new markets like cloud services, CPUs, and more). Our conviction remains high that Nvidia will continue to dominate the AI hardware space for the foreseeable future, barring some black swan event.
In short, we restate our bullish thesis: Nvidia is the unparalleled leader in a secular growth domain (AI infrastructure) that is transforming every industry. The company’s blend of visionary product development, execution excellence, and financial might make it a unique asset in any portfolio. We reaffirm our Buy rating on NVDA and would view any significant pullbacks as opportunities. While the stock may not be “cheap” in the traditional sense, the combination of rapid earnings growth and Nvidia’s entrenched competitive position makes it a compelling long-term investment, in our view. We’re strapping in with Jensen & Co. for the next leg of this AI-driven journey – with Blackwell in hand and Rubin on the way, the future looks as bright as ever for Nvidia.
Buy – Reiterated confidently.
Important Disclosure
This article is provided for informational and educational purposes only and should not be construed as an offer, solicitation, or recommendation to buy or sell any securities or investment products. The views and opinions expressed are those of Henriot Investment Management Ltd and do not constitute investment advice or a guarantee of future performance. Investors should conduct their own due diligence or consult a qualified financial advisor before making any investment decisions. Past performance is not indicative of future results. Henriot Investment Management Ltd is not a licensed securities dealer, broker, U.S. investment advisor, or investment bank.
We may initiate, modify, or close a position in the shares of NVDA—whether through stock ownership, options, or other derivatives—within 100 hours of publication, without further notice. Our positions are subject to change at any time based on market developments or revisions to our investment outlook.