Real-time video composition has a lot of use cases. Whether you are building software for event broadcasting, live streaming, or videoconferencing, you might want to merge multiple streams into one with some cool effects. That's why, over a year ago, we set out on a mission to build software that lets you do exactly that, without the hassle of writing low-level code yourself. During the last year of research and development, we learned a lot about the topic, and we want to share that knowledge with you. I'll discuss the upsides and downsides of the technologies we researched (FFmpeg, OpenGL, Vulkan, and wgpu) and some crucial problems we encountered along the way. Lastly, I will talk about integrating the VideoCompositor into our application to distribute WebRTC video calls via HLS to large audiences.