Astera Labs, Inc. Common Stock (NASDAQ:ALAB) Q1 2024 Earnings Call Transcript


Astera Labs, Inc. Common Stock (NASDAQ:ALAB) Q1 2024 Earnings Call Transcript May 8, 2024


Operator: Thank you for standing by. My name is Regina, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs First Quarter 2024 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After management remarks, there will be a question-and-answer session. [Operator Instructions] I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.

Leslie Green: Thank you, Regina. Good afternoon, everyone, and welcome to the Astera Labs first quarter 2024 earnings call. Joining us today on the call are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President, Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management’s current beliefs, expectations and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today’s earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO.

It is not possible for the company’s management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call to conform them to actual results, whether as a result of new information, future events or changes in our expectations, except as required by law.


Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be an important measure of the company’s performance. These non-GAAP financial measures are provided in addition to and not as a substitute for or superior to financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the Investor Relations portion of our website. With that I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs.

Jitendra?

Jitendra Mohan: Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first earnings conference call as a public company. This year is off to a great start with Astera Labs seeing strong and continued momentum along with the successful execution of our IPO in March. First and foremost, I would like to thank our investors, customers, partners, suppliers and employees for their steadfast support over the past six years. We have built Astera Labs from the ground up to address the connectivity bottlenecks to unlock the full potential of AI in the cloud. With your help, we’ve been able to scale the company and deliver innovative technology solutions to the leading hyperscalers and AI platform providers worldwide.

But our work is only just beginning. We are supporting the accelerated pace of AI infrastructure deployments with leading hyperscalers by developing new product categories, while also exploring new market segments. Looking at industry reports over the past several weeks, it is clear that we remain in the early stages of a transformative investment cycle by our customers to build out the next generation of infrastructure that is needed to support their AI roadmaps. According to recent earnings reports, on a consolidated basis, CapEx spend during the first quarter for the four largest U.S. hyperscalers grew by roughly 45% year-on-year to nearly $50 billion. Qualitative commentary implies continued quarterly growth in CapEx for this group through the balance of the year.

This is truly an exciting time for technology innovators within the cloud and AI infrastructure market, and we believe Astera Labs is well positioned to benefit from these growing investment trends. Against this strong industry backdrop, Astera Labs delivered strong Q1 results with record revenue, strong non-GAAP operating margin and positive operating cash flows, while also introducing two new products. Our revenue in Q1 was $65.3 million, up 29% from the previous quarter and up 269% from the same period in 2023. Non-GAAP operating margin was 24.3%, and we delivered $0.10 of pro forma non-GAAP diluted earnings per share. I will now provide some commentary around our position in this rapidly evolving AI market. Then I will turn the call over to Sanjay to discuss new products and our growth strategy.

Finally, Mike will provide additional details on our Q1 results and our Q2 financial guidance. Complex AI model sizes continue doubling about every six months, fueling the demand for high performance AI platforms running in the cloud. Modern GPUs and AI accelerators are phenomenally good at compute, but without equally fast connectivity, they remain highly underutilized. Technology innovation within the AI Accelerator market has been moving forward at an incredible pace and the number and variety of architectures continues to expand to handle trillion parameter models, while improving AI infrastructure utilization. We continue to see our hyperscaler customers utilize the latest merchant GPUs and proprietary AI accelerators to compose unique data center scale AI infrastructure.

However, no two clouds are the same. The major hyperscalers are architecting their systems to deliver maximum AI performance based on the specific cloud infrastructure requirements, from power and cooling to connectivity. We are working alongside our customers to ensure these complex and different architectures achieve maximum performance and operate reliably even as data rates continue to double. As the systems continue to move data faster and grow in complexity, we expect to see our average dollar content per AI platform increase and even more so with the new products we have in development. Our conviction in maintaining and strengthening our leadership position in the market is rooted in our comprehensive intelligent connectivity platform and our deep customer partnerships.

The foundation of our platform consists of semiconductor-based and software-defined connectivity ICs, modules and boards, which all support our COSMOS software suite. We provide customers with a complete customizable solution, chips, hardware and software, which maximizes flexibility without performance penalties, delivers deep fleet management capabilities and matches pace with the ever-quickening product introduction cycles of our customers. Not only does COSMOS software run on our entire product portfolio, but it is also integrated within our customers’ operating stacks to deliver seamless customization, optimization and monitoring. Today, Astera Labs is focused on three core technology standards: PCI Express, Ethernet and Compute Express Link.

We’re shipping three separate product families, all generating revenue and in various stages of adoption and deployment supporting these different connectivity protocols. Let me touch upon each of these critical data center connectivity standards and how we support them with our differentiated solutions. First, PCI Express. PCIe is the native interface on all AI accelerators, TPUs and GPUs, and is the most prevalent protocol for moving data at high bandwidth and low latency inside servers. Today, we see PCIe Gen 5 getting widely deployed in AI servers. These AI servers are becoming increasingly complex. Faster signal speeds in combination with complex server topologies are driving significant signal integrity challenges. To help solve these problems, our hyperscaler and AI accelerator customers utilize our PCIe Smart DSP Retimers to extend the reach of PCIe Gen 5 between various components within heterogeneous compute architectures.

Our Aries product family represents the gold standard in the industry for performance, robustness and flexibility, and is the most widely deployed solution in the market today. Our leadership position with millions of critical data links running through our Aries Retimers and our COSMOS software enables us to do something more, become the eyes and ears to monitor the connectivity infrastructure and help fleet managers ensure their AI infrastructure is operating at full utilization. Deep diagnostics and monitoring capabilities in our chips and extensive fleet management features in our COSMOS software, which are deployed together in our customers’ fleets, have become a material differentiator for us. Our COSMOS software provides the easiest and fastest path to deploy the next generation of our devices.

We see AI workloads and newer GPUs driving the transition from PCIe Gen 5 running at 32 gigabits per second per lane to PCIe Gen 6 running at 64 gigabits per second per lane. Our customers are evaluating our Gen 6 solutions now, and we expect them to make design decisions in the next six to nine months. In addition, while we see our Aries devices being heavily deployed today for interconnecting AI accelerators with CPUs and networking, we also expect our Aries devices to play an increasing role in backend fabrics, interconnecting AI Accelerators to each other in AI clusters. Next, let’s talk about Ethernet. Ethernet protocol is extensively deployed to build large scale networks within data centers. Today, Ethernet makes up the vast majority of connections between servers and top of rack switches.
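The generational step described above is a straight doubling of the per-lane signaling rate. A minimal sketch of the lane math, assuming a standard x16 link width and ignoring encoding overhead (the function name and structure are illustrative, not from the call):

```python
# Back-of-the-envelope check of the PCIe lane rates quoted above:
# Gen 5 signals at 32 Gbit/s per lane; Gen 6 doubles that to 64 Gbit/s.
def pcie_link_bandwidth_gbps(gen: int, lanes: int = 16) -> int:
    """Raw per-direction signaling bandwidth in Gbit/s for a PCIe link.

    Ignores encoding/protocol overhead; lanes defaults to a full x16 link.
    """
    per_lane_gbps = {5: 32, 6: 64}
    return per_lane_gbps[gen] * lanes

print(pcie_link_bandwidth_gbps(5))  # 512 Gbit/s for an x16 Gen 5 link
print(pcie_link_bandwidth_gbps(6))  # 1024 Gbit/s for an x16 Gen 6 link
```

Each generation doubles the aggregate link bandwidth at the same lane count, which is why retimers must requalify signal integrity at every transition.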

Driven by AI workloads’ insatiable need for speed, Ethernet data rates are doubling roughly every two years, and we expect the transition from 400 gig Ethernet to 800 gig Ethernet to take place later in 2025. 800 gig Ethernet is based on 100 gigabits per second per lane signaling rate, which is facing tremendous pressure on conventional passive cabling solutions. Like our PCIe Retimers, our portfolio of Taurus Ethernet Retimers helps relieve these connectivity bottlenecks by overcoming the reach, signal integrity and bandwidth issues by enabling robust 100 gig per lane connectivity over copper. Unlike our Aries portfolio, which is largely sold in a chip format, we sell our Taurus portfolio largely in the form of smart cable modules that are assembled into active electrical cables by our cable partners.
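The 800 gig transition described above follows directly from the per-lane rate: a port's total speed is its lane count times the lane rate. A small sketch under that assumption (helper name is illustrative; real 400G ports can also be built from 8x50G lanes):

```python
# Sketch of the Ethernet lane math mentioned above: 800 gig Ethernet is
# built on 100 Gbit/s-per-lane signaling, so an 800G port needs 8 lanes.
def ethernet_lanes(port_gbps: int, lane_gbps: int = 100) -> int:
    """Number of serial lanes needed for a port at a given per-lane rate."""
    return port_gbps // lane_gbps

print(ethernet_lanes(800))  # 8 lanes at 100G each
print(ethernet_lanes(400))  # 4 lanes at 100G each
```

It is the jump to 100G per lane, not the port speed itself, that strains passive copper reach and creates the opening for retimed active electrical cables.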

This approach allows us to focus on our strengths and fully leverage our COSMOS software suite to offer customization, easy qualification, deep telemetry and field upgrades to our customers. At the same time, this model enables our cable partners to continue to excel at bringing the best cabling technology to our common end customers. We expect 400 gig deployments based on our Taurus smart cable modules to begin to ramp in the back half of 2024. We see the transition to 800 gig Ethernet starting to happen in 2025, resulting in broad demand for AECs to both scale up and scale out AI infrastructure and strong growth for our Taurus Ethernet Smart Cable module portfolio over the coming years. Last is Compute Express Link, or CXL. CXL is a low-latency cache-coherent protocol, which runs on top of the PCIe protocol.

CXL provides an open standard for disaggregating memory from compute. CXL allows you to balance the memory bandwidth and capacity requirements independently from compute requirements, resulting in better utilization of compute infrastructure. Over the next several years, data center platform architects plan to utilize CXL technology to solve memory bandwidth and capacity bottlenecks that are being exacerbated by the exponential increase in compute capability of CPUs and GPUs. Major hyperscalers are actively exploring different applications of CXL memory expansion. While the adoption of CXL technology is currently in its infancy, we do expect to see increased deployments with the introduction of next generation CXL-capable datacenter server CPUs such as Granite Rapids, Turin and others.

Our first-to-market portfolio of Leo CXL memory connectivity controllers is very well positioned to enable our customers to overcome memory bottlenecks and deliver significant benefits to their end customers. We have worked closely with our hyperscaler customers and CPU partners to optimize our solution to seamlessly deliver these benefits without any application-level software changes. Furthermore, we have used our COSMOS software to incorporate significant learnings we have had over the last 18 months and to customize our Leo memory expansion solution to the different requirements from each hyperscaler. We anticipate memory expansion will be the first high-volume use case that will drive design wins into volume production in the 2025 timeframe. We remain very excited about the potential of CXL in datacenter applications and believe that most new CPUs will support CXL and hyperscalers will increasingly deploy innovative solutions based on CXL.

With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss some of our recent product announcements and our long-term growth strategy.

Sanjay Gajendra: Thanks, Jitendra, and good afternoon, everyone. Astera Labs is well positioned to demonstrate long-term growth through a combination of three factors. One, we have strong secular tailwinds with increased AI infrastructure investment. Two, the next generation of products within existing product lines are gaining traction. And three, the introduction of new product lines. Over the past three months, we announced two new and significant products that play an important role in enabling next generation AI platforms and provide incremental revenue opportunities as early as the second half of 2024. First, we expanded our widely deployed, field-proven Aries Smart DSP Retimer portfolio with the introduction and public demonstration of our Aries 6 PCIe Retimer that delivers robust, low power PCIe Gen 6 and CXL 3 connectivity between next generation GPUs, AI accelerators, CPUs, NICs, and CXL memory controllers.

Aries 6 is the third generation of our PCIe Smart Retimer portfolio and provides the bandwidth required to support data intensive AI workloads while maximizing utilization of next generation GPUs operating at 64 gigabits per second per lane. Fully compatible with our field-deployed COSMOS software suite, Aries 6 incorporates the tribal knowledge we have acquired over the past four years by partnering with and enabling hyperscalers to deploy AI infrastructure in the cloud. Aries 6 also enables a seamless upgrade path from current PCIe Gen 5 based platforms to next generation PCIe Gen 6 based platforms for our customers. With Aries 6, we demonstrated the industry’s lowest power at 11 watts at Gen 6 in a full 16-lane configuration running at 64 gigabits per second per lane, significantly lower than our competitors and even lower than our own Aries Gen 5 Retimer.

Through collaboration with leading providers of GPUs and CPUs such as AMD, ARM, Intel, and NVIDIA, Aries 6 is being rigorously tested at Astera’s Cloud-Scale Interop Lab and in customers’ platforms to minimize interoperation risk, lower system development cost, and reduce time to market. Aries 6 was demonstrated at NVIDIA’s GTC event during the week of March 18th. Aries 6 is currently sampling to leading AI and cloud infrastructure providers, and we expect initial volume ramps to begin in 2025. We also announced the introduction and sampling of our Aries PCIe and CXL Smart Cable Modules for Active Electrical Cables, or AECs, to support robust, long-reach copper cable connectivity up to 7 meters. This is 3x the standard reach defined in the PCIe spec.

Our new PCIe AEC solution is designed for GPU clustering applications by extending PCIe backend fabric deployments to multiple racks. This new Aries product category expands our market opportunity from within the rack to across racks. As with our entire product portfolio, Aries Smart Cable Modules support our COSMOS software suite to deliver a powerful yet familiar array of link monitoring, fleet management and rack tools which are customizable for the diverse needs of our hyperscaler customers. We leveraged our expertise in silicon, hardware and software to deliver a complete solution in record time, and we expect initial shipments to begin later this year for the PCIe AECs. We believe this new Aries product announcement represents another concrete example of Astera Labs driving the PCIe ecosystem with technology leadership with an intelligent connectivity platform that includes silicon chips, hardware modules and the COSMOS software suite.

Over the coming quarters, we anticipate ongoing generational product upgrades to existing product lines and the introduction of new product categories developed from the ground up to fully utilize the performance and productivity capabilities of generative AI. In summary, over the past few years, we have built a great team that is delivering technology that is foundational to deploying AI infrastructure at scale. We have gained the trust and support of our world-class customer base by executing, innovating and delivering on our commitments. These tight relationships are resulting in new product developments and an enhanced technology roadmap for Astera. We look forward to continued collaboration with our partners as a new era unfolds driven by AI applications.

With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q1 financial results and Q2 outlook.

Mike Tate: Thanks, Sanjay, and thanks to everyone for joining. This overview of our Q1 financial results and Q2 guidance will be on a non-GAAP basis. The primary difference in Astera Labs’ non-GAAP metrics is stock-based compensation and the related income tax effects. Please refer to today’s press release available on the Investor Relations section of our website for more details on both our GAAP and non-GAAP Q2 financial outlook as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q1 of 2024, Astera Labs delivered record quarterly revenue of $65.3 million, which was up 29% versus the previous quarter and 269% higher than the revenue in Q1 of 2023. During the quarter, we shipped products to all the major hyperscalers and AI accelerator manufacturers.

We recognized revenues across all three of our product families during the quarter, with Aries products being the largest contributor. Aries enjoyed solid momentum in AI-based platforms as customers continue to introduce and ramp their PCIe Gen 5 capable AI systems, along with overall strong unit growth with the industry’s growing investment in generative AI. Also, we continue to make good progress with our Taurus and Leo product lines, which are in the early stages of revenue contribution. In Q1, Taurus revenues were primarily shipping into 200 gig Ethernet based systems, and we expect Taurus revenues to sequentially track higher as we progress through 2024, as we also begin to ship into 400 gig Ethernet based systems. Q1 Leo revenues were largely from customers purchasing pre-production volumes for development of their next generation CXL-capable compute platforms expected to launch late this year with the next server CPU refresh cycle.

Q1 non-GAAP gross margin was 78.2%, up 90 basis points compared with 77.3% in Q4 2023. The positive gross margin performance during the quarter was driven by healthy product mix. Non-GAAP operating expenses for Q1 were $35.2 million, up from $27 million in the previous quarter. Within non-GAAP operating expenses, R&D expense was $22.9 million, sales and marketing expense was $6 million and general and administrative expenses were $6.3 million. Non-GAAP operating expenses during Q1 increased largely due to a combination of increased headcount and incremental costs associated with being a public company. The largest delta between non-GAAP and GAAP operating expenses in Q1 was stock-based compensation recognized in connection with our recent IPO and its associated employer payroll taxes and, to a lesser extent, our normal quarterly stock-based compensation expense.

Non-GAAP operating margin for Q1 was 24.3% as revenues scaled in proportion with our operating expenses on a sequential basis. Interest income in Q1 was $2.6 million. Our non-GAAP tax provision was $4.1 million for the quarter, which represents a tax rate of 22% on a non-GAAP basis. Pro forma non-GAAP fully diluted share count for Q1 was 147.5 million shares. Our pro forma non-GAAP diluted earnings per share for the quarter was $0.10. The pro forma non-GAAP diluted share count includes the assumed conversion of our preferred stock for the entire quarter, while our GAAP share count only includes the conversion of our preferred stock for the stub period following our March IPO. Going forward, given that all the preferred stock has now been converted to common stock upon our IPO, those preferred shares will be fully included in the share count for both GAAP and non-GAAP.

Cash flow from operating activities for Q1 was $3.7 million and we ended the quarter with cash, cash equivalents and marketable securities of just over $800 million. Now turning to our guidance for Q2 of fiscal 2024. We expect Q2 revenues to increase from Q1 levels within a range of 10% to 12% sequentially. We believe our Aries product family will continue to be the largest component of revenue and will be the primary driver of sequential growth in Q2. Within the Aries product family, we expect the growth to be driven by increased unit demand for AI servers as well as the ramp of new product designs with our customers. We expect non-GAAP gross margins to be approximately 77% given a modest increase in hardware shipments relative to standalone ICs. We believe as our hardware solutions grow as a percentage of revenue over the coming quarters, our gross margins will begin to trend towards our long-term gross margin model of 70%.

We expect non-GAAP operating expenses to be approximately $40 million as we remain aggressive in expanding our R&D resource pool across headcount and intellectual property, while also scaling our back office functions. Interest income is expected to be $9 million. Our non-GAAP tax rate should be approximately 23% and our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share of approximately $0.11. This concludes our prepared remarks. Once again, we very much appreciate everyone joining the call. And now we’ll open the line for questions. Operator?
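The guidance pieces above can be tied together with simple arithmetic. A hedged sketch, assuming the sequential growth applies to the reported Q1 revenue of $65.3 million and taking the guided ~77% gross margin, ~$40 million opex, $9 million interest income, ~23% tax rate and ~180 million shares at face value (variable names are illustrative):

```python
# Reconstructing the implied Q2 figures from the guidance in the call.
q1_revenue_m = 65.3                      # reported Q1 revenue, $M
low = q1_revenue_m * 1.10                # +10% sequential
high = q1_revenue_m * 1.12               # +12% sequential
mid = (low + high) / 2

gross_profit = mid * 0.77                # ~77% non-GAAP gross margin
operating_income = gross_profit - 40.0   # ~$40M non-GAAP opex
pretax_income = operating_income + 9.0   # $9M interest income
net_income = pretax_income * (1 - 0.23)  # ~23% non-GAAP tax rate
eps = net_income / 180.0                 # ~180M fully diluted shares

print(f"Implied Q2 revenue range: ${low:.1f}M to ${high:.1f}M")
print(f"Implied non-GAAP diluted EPS: ${eps:.2f}")  # rounds to $0.11
```

Working the midpoint through the guided margins lands at roughly $0.11 of non-GAAP diluted EPS, consistent with the figure quoted on the call.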


