
Richard Henry

Sam Pritzker
In the new era of AI infrastructure, CMOS scaling remains the workhorse for heavy computational workloads. But the need for an energy-efficient solution imposes a paradigm shift at the interconnect level, requiring an intimate 3D co-integration of advanced ASICs and optical connectivity.
As the architectural complexity of new products increases, designers must rely on state-of-the-art platforms with a short path to manufacturing. In this workshop, we will highlight how you can access the following technologies for your future products:
- Advanced-node ASIC down to TSMC N2
- Imec’s integrated photonics platforms from 200G up to co-packaged optics
- Imec’s advanced 3D packaging techniques, from interposers to hybrid bonding
Location: Room 207
Duration: 1 hour

Philippe Soussan
Philippe Soussan is Technology Portfolio Director at imec. For 20 years, he has held different positions in R&D management at imec in the fields of sensors, photonics, and 3D packaging, addressing these technologies from R&D up to the manufacturing level.
His expertise lies in wafer-scale technologies, and he has authored over 100 publications and holds more than 20 patents in these fields.
Since 2024, Philippe has been in charge of strategy definition within the “IC-link by imec” sector. This imec business line provides access to design and manufacturing services in the most advanced ASIC and specialty technologies.
IMEC
Website: https://www.imec-int.com/en
Imec is a world-leading research and innovation center in nanoelectronics and digital technologies. Imec leverages its state-of-the-art R&D infrastructure and team of more than 5,500 employees and top researchers for R&D in advanced semiconductor and system scaling, silicon photonics, artificial intelligence, beyond 5G communications, and sensing technologies.
As imec’s application-specific IC (ASIC) division, imec.IC-link serves start-ups, SMEs, OEMs, and universities by supporting the full ASIC development process from design and IP services to production, packaging, and testing. It realizes over 600 yearly tape-outs in CMOS, GaN-on-SOI, and silicon photonics technologies.
In this session, we will explore the end-to-end workflow of managing foundation model (FM) development on Amazon SageMaker HyperPod. Our discussion will cover both distributed model training and inference using frameworks like PyTorch and KubeRay. Additionally, we will dive into operational aspects, including the system observability and resiliency features that deliver scale and cost-performance with Amazon EKS on SageMaker HyperPod. By the end of this hands-on session, you will have a robust understanding of training and deploying FMs efficiently on AWS, and you will learn to leverage cutting-edge techniques and tools to ensure high-performance, reliable, and scalable FM development.
Location: Room 206
Duration: 1 hour
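To give a flavor of the distributed training portion, the core loop can be sketched in minimal form. The script below is an illustrative stand-in, not workshop material: it runs a single-process PyTorch DDP step with the CPU-friendly gloo backend, whereas on a HyperPod cluster a launcher such as torchrun would set the rendezvous environment variables and spread the same script across many ranks. The toy model and hyperparameters are assumptions.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def train() -> float:
    # torchrun normally sets these; default to a 1-process group for a local dry run
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    os.environ.setdefault("RANK", "0")
    os.environ.setdefault("WORLD_SIZE", "1")
    dist.init_process_group(backend="gloo")  # gloo works on CPU for this sketch

    model = DDP(torch.nn.Linear(16, 1))  # toy stand-in for a foundation model
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(32, 16), torch.randn(32, 1)

    for _ in range(5):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()  # DDP all-reduces gradients across ranks here
        opt.step()

    dist.destroy_process_group()
    return loss.item()


final_loss = train()
print(f"final loss: {final_loss:.4f}")
```

On a multi-node cluster, the only change is that the launcher supplies the rank and world-size variables, so the identical script scales out without modification.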

Mark Vinciguerra

Aravind Neelakantan

Aman Shanbhag
Aman Shanbhag is a Specialist Solutions Architect on the ML Frameworks team at Amazon Web Services (AWS), where he helps customers and partners with deploying ML training and inference solutions at scale. Before joining AWS, Aman graduated from Rice University with degrees in computer science, mathematics, and entrepreneurship.
AWS
Website: https://aws.amazon.com/
Since 2006, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud. AWS has been continually expanding its services to support virtually any workload, and it now has more than 240 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, media, and application development, deployment, and management from 114 Availability Zones within 36 geographic regions, with announced plans for 12 more Availability Zones and four more AWS Regions in New Zealand, the Kingdom of Saudi Arabia, Taiwan, and the AWS European Sovereign Cloud. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

Anna Doherty
The rapid evolution of high-performance computing (HPC) clusters has been instrumental in driving transformative advancements in AI research and applications. These sophisticated systems enable the processing of complex datasets and support groundbreaking innovation. However, as their adoption grows, so do the critical security challenges they face, particularly when handling sensitive data in multi-tenant environments where diverse users and workloads coexist. Organizations are increasingly turning to Confidential Computing as a framework to protect AI workloads, emphasizing the need for robust HPC architectures that incorporate runtime attestation capabilities to ensure trust and integrity.
In this session, we present an advanced HPC cluster architecture designed to address these challenges, focusing on how runtime attestation of critical components – such as the kernel, Trusted Execution Environments (TEEs), and eBPF layers – can effectively fortify HPC clusters for AI applications operating across disjoint tenants. This architecture leverages cutting-edge security practices, enabling real-time verification and anomaly detection without compromising the performance essential to HPC systems.
Through use cases and examples, we will illustrate how runtime attestation integrates seamlessly into HPC environments, offering a scalable and efficient solution for securing AI workloads. Participants will leave this session equipped with a deeper understanding of how to leverage runtime attestation and Confidential Computing principles to build secure, reliable, and high-performing HPC clusters tailored for AI innovations.
Location: Room 201
Duration: 1 hour

Jason Rogers
Jason Rogers is the Chief Executive Officer of Invary, a cybersecurity company that ensures the security and confidentiality of critical systems by verifying their Runtime Integrity. Leveraging NSA-licensed technology, Invary detects hidden threats and reinforces confidence in an existing security posture. Previously, Jason served as the Vice President of Platform at Matterport, successfully launched a consumer-facing IoT platform for Lowe's, and developed numerous IoT and network security software products for Motorola.

Ayal Yogev
Confidential Computing Consortium
Website: https://confidentialcomputing.io/
The Confidential Computing Consortium (CCC) is a project community at the Linux Foundation focused on securing data in use and accelerating the adoption of Confidential Computing through open collaboration. It brings together hardware vendors, cloud providers, and software developers to advance Trusted Execution Environment (TEE) technologies and standards. The consortium embodies the open governance and open collaboration that have aided the success of similarly ambitious efforts, and it includes commitments from numerous member organizations and contributions from several open source projects.
Dive into a hands-on workshop designed exclusively for AI developers. Learn to leverage the power of Google Cloud TPUs, the custom accelerators behind Google Gemini, for highly efficient LLM inference using vLLM. In this trial run for Google Developer Experts (GDEs), you'll build and deploy Gemma 3 27B on Trillium TPUs with vLLM and Google Kubernetes Engine (GKE). Explore advanced tooling like Dynamic Workload Scheduler (DWS) for TPU provisioning, Google Cloud Storage (GCS) for model checkpoints, and essential observability and monitoring solutions. Your live feedback will directly shape the future of this workshop, and we encourage you to share your experience with the vLLM/TPU integration on your social channels.
Location: Room 207
Duration: 1 hour
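For context on what LLM inference with vLLM looks like from the client side, here is a hedged sketch. Once a `vllm serve` instance is running behind a GKE Service, it exposes an OpenAI-compatible API; the service hostname below is a hypothetical example, and the model ID assumes the Gemma 3 27B instruction-tuned checkpoint.

```python
# Hedged client-side sketch: querying a vLLM server's OpenAI-compatible
# /v1/chat/completions endpoint. The Service DNS name and model ID are
# assumptions for illustration; adjust them to your deployment.
import json
import urllib.request

VLLM_URL = "http://vllm-service:8000/v1/chat/completions"  # hypothetical GKE Service


def build_request(prompt: str, model: str = "google/gemma-3-27b-it") -> urllib.request.Request:
    """Build the JSON request body vLLM's OpenAI-compatible server expects."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }).encode()
    return urllib.request.Request(
        VLLM_URL, data=body, headers={"Content-Type": "application/json"}
    )


def ask(prompt: str) -> str:
    # Requires a reachable server, so it is not executed in this sketch.
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint follows the OpenAI wire format, existing client libraries and tooling can be pointed at the TPU-backed deployment without code changes.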

Niranjan Hira
As a Product Manager on our AI Infrastructure team, Hira focuses on how Google Cloud offerings can help customers and partners build more helpful AI experiences for users. With over 30 years of experience building applications and products across multiple industries, he likes to hog the whiteboard and tell developer tales.
Google Cloud provides leading infrastructure, platform capabilities, and industry solutions. We deliver enterprise-grade cloud solutions that leverage Google’s cutting-edge technology to help companies operate more efficiently and adapt to changing needs, giving customers a foundation for the future. Customers in more than 150 countries use Google Cloud as their trusted partner to solve their most critical business problems.

Julianne Kur
Location: Room 206
Duration: 1 hour
DataBank
Website: https://www.databank.com/
DataBank helps the world’s largest enterprises, technology, and content providers ensure their data and applications are always on, always secure, always compliant, and ready to scale to meet the needs of the artificial intelligence era. Recognized by Deloitte in 2023 and 2024, and Inc. 5000 in 2024 as one of the fastest-growing private US companies, DataBank’s edge colocation and infrastructure footprint consists of 65+ HPC-ready data centers in 25+ markets, 20 interconnection hubs, and on-ramps to an ecosystem of cloud providers with virtually unlimited reach.