Nov-19 09:00-09:25

Large Language Models (LLMs) mark a transformative advancement in artificial intelligence. These models are trained on vast datasets comprising text and code, enabling them to handle complex tasks such as text generation, language translation, and interactive querying.

As LLMs are integrated into more applications, ranging from chatbots and search engines to creative writing aids, the need to monitor and understand their behavior intensifies.

Observability plays a crucial role in this context. It involves the systematic collection and analysis of data to enhance LLM performance, identify and correct biases, troubleshoot issues, and ensure AI systems are both reliable and trustworthy.

In this discussion, we will explore the concept of LLM observability in depth, including the initial LLM semantic convention that has just been adopted by the OpenTelemetry community, and how OpenTelemetry fits into the world of LLM observability. Additionally, we will share more detail about how OpenLLMetry leverages OpenTelemetry to provide LLM observability across the whole AI stack, including vector databases, LLMs, model orchestration platforms, and more.
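
To make this concrete, below is a minimal sketch of what OpenTelemetry-based LLM instrumentation can look like in Python. The span name, the `gen_ai.*` attribute keys, and the token counts are illustrative assumptions drawn from the draft GenAI semantic conventions; the exact keys adopted by the community, and the richer auto-instrumentation OpenLLMetry provides, may differ.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Basic tracer setup: export spans to the console for demonstration.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("llm-observability-demo")


def call_llm(prompt: str) -> str:
    # Wrap an LLM request in a span and attach gen_ai.* attributes.
    # Attribute names here are assumptions based on the draft GenAI
    # semantic conventions, not a definitive schema.
    with tracer.start_as_current_span("chat gpt-4") as span:
        span.set_attribute("gen_ai.system", "openai")
        span.set_attribute("gen_ai.request.model", "gpt-4")

        response = "...model output..."  # placeholder for a real client call

        span.set_attribute("gen_ai.usage.input_tokens", 42)    # illustrative values
        span.set_attribute("gen_ai.usage.output_tokens", 128)
        return response


if __name__ == "__main__":
    print(call_llm("Explain observability in one sentence."))
```

The same pattern extends to the rest of the stack: spans around vector database queries and orchestration steps can be correlated with the LLM call spans, which is the approach the talk will walk through.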