
Embeddings: How Computers Learned to Read

Saturday 10:50 AM–11:20 AM in Eureka 2

As large language models take over the world, we’re now working alongside machines that can read, write and converse – coding with Copilot, chatting with ChatGPT and drawing with DALL-E. But how do machines, which fundamentally operate on binary code, achieve such remarkable feats? The answer lies in embeddings: dense vectors of numbers that can represent complex data – text, images, or even abstract concepts.


Artificial Intelligence, Large Language Models, and Machine Learning have revolutionized our ability to automate complex tasks that once required significant human time and effort. But how do machines, which fundamentally operate on binary code, achieve such remarkable feats? The answer lies in embeddings - a powerful concept at the heart of modern AI. Embeddings are the bridge between human-understandable information and the numerical language of computers. They allow us to represent complex data - whether it's text, images, or even abstract concepts - as dense vectors of numbers. In this presentation, we'll demystify embeddings and give you a practical and intuitive understanding of how they work.
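To make the "dense vectors of numbers" idea concrete, here is a minimal sketch (illustrative only, not material from the talk) using made-up four-dimensional vectors. Words with related meanings get vectors pointing in similar directions, which cosine similarity can measure:

```python
import numpy as np

# Toy 4-dimensional "embeddings". Real models use hundreds of dimensions,
# and the values below are invented purely for illustration.
embeddings = {
    "cat": np.array([0.9, 0.1, 0.3, 0.0]),
    "dog": np.array([0.8, 0.2, 0.4, 0.1]),
    "car": np.array([0.0, 0.9, 0.1, 0.8]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically similar words have similar vectors, so the cat-dog
# similarity comes out higher than the cat-car similarity.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))
print(cosine_similarity(embeddings["cat"], embeddings["car"]))
```

This is the core trick: once meaning is encoded as geometry, "how alike are these two things?" becomes a simple vector computation.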

We'll dive into:

1. What embeddings are and how they enable machines to process and understand human language
2. How you can create your own embeddings or utilise existing embedding models to encode language in Python
3. How embeddings underpin LLMs
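As a rough sketch of point 2, assuming nothing beyond NumPy: at its simplest, an embedding model is a lookup table mapping each token ID to one row of a matrix. The random values here stand in for weights that a real model would learn during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny vocabulary; real models have tens of thousands of tokens.
vocab = ["the", "cat", "sat", "on", "mat"]
token_to_id = {tok: i for i, tok in enumerate(vocab)}

# The embedding "model" is just a lookup table: one row per token.
# These rows are random here; in practice they are learned from data.
embedding_dim = 8
embedding_matrix = rng.normal(size=(len(vocab), embedding_dim))

def embed(sentence):
    # Tokenise naively on whitespace, then look up each token's row.
    ids = [token_to_id[tok] for tok in sentence.split()]
    return embedding_matrix[ids]  # shape: (num_tokens, embedding_dim)

vectors = embed("the cat sat on the mat")
print(vectors.shape)  # (6, 8)
```

Libraries such as gensim or Hugging Face transformers wrap exactly this pattern (plus the training that makes the rows meaningful) behind a convenient API.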

By the end, you'll have a solid grasp of this fundamental AI concept and be equipped to start experimenting with embeddings in your own projects.

Liam Bluett

I've been working in data science/analytics for around 3 years now; my tool of choice is Python.