Talk

Friday 14 · 12:30–13:20
Track 3

Everything You Need to Know About Running LLMs Locally

As large language models (LLMs) become more accessible, running them locally unlocks exciting opportunities for developers, engineers, and privacy-focused users. Why rely on costly cloud AI services that share your data when you could deploy your own models, tailored to your needs? In this session, we’ll dive into the advantages of local LLM deployment, from selecting the right open source model to optimizing performance on consumer hardware and integrating with your own data. We’ll explore the journey to your own local AI stack, covering key technical details such as model quantization, API integration with IDE code assistants, and advanced methods like Retrieval-Augmented Generation (RAG) to connect your LLM to private data sources. Don’t miss out on the fun live demos that prove the bright future of open source AI is already here!
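
By way of illustration, here is a minimal sketch of the kind of local setup the session covers: chatting with a locally running, quantized model over an OpenAI-compatible HTTP API. It assumes an Ollama server on its default port with a model such as llama3.2 already pulled; the runtime, model tag, and prompt are illustrative choices, not necessarily what the live demos use.

# Minimal sketch: query a local LLM through Ollama's OpenAI-compatible
# chat endpoint. Assumes `ollama serve` is running on the default port
# and `ollama pull llama3.2` has been run first (llama3.2 is an
# illustrative model tag, not a requirement of the approach).
import json
import urllib.request

URL = "http://localhost:11434/v1/chat/completions"  # Ollama's default port

payload = {
    "model": "llama3.2",
    "messages": [
        {"role": "user", "content": "In one sentence, what is model quantization?"}
    ],
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    reply = json.load(response)

# The OpenAI-compatible schema returns the answer under choices[0].message.content.
print(reply["choices"][0]["message"]["content"])

Because the server speaks the OpenAI chat-completions schema, the same request shape is what IDE code assistants send when pointed at a custom local base URL, which is roughly how the API-integration piece of a local stack fits together.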

/Difficulty

Easy

/Language

English

/Accessibility

Subtitled

/Categories

Artificial Intelligence · Open Source / Free Software · Emerging Technologies

Cedric Clyburn

Senior Developer Advocate

Red Hat

A Senior Developer Advocate at Red Hat, Cedric is an enthusiastic software technologist with a background in Kubernetes, AI, and container tools. Based out of New York City!

Roberto Carratalá

I’m Roberto Carratalá, an IT professional with more than ten years of experience in Linux, virtualization, cloud, DevOps, and Kubernetes/OpenShift, with an MSc in AI/ML.