

In the comment sections of our articles about our server, users often ask: "Why jump through so many hoops when you can do the same with a single line of code in FFmpeg!?" And indeed, with a single FFmpeg command we can record what is happening on the screen into a file.
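As a minimal sketch (assuming a Linux machine with X11; the display name, resolution, and encoder settings below are placeholders, so adjust them to your setup), such a recording could look like this:

```bash
# Capture the X11 display :0.0 at 1920x1080 and 25 fps,
# encode with x264 and write the result to a local file
ffmpeg -f x11grab -framerate 25 -video_size 1920x1080 -i :0.0 \
       -c:v libx264 -preset ultrafast screen.mp4
```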

But what next? What if we need, for example, to broadcast the screen to a website? We would have to record the video, upload it to a server, and then play it back on the site. That approach might be acceptable for static content, but it is hardly enough for a live, dynamic stream. This is the point where "a single command" stops being sufficient (although, spoilers: we did manage to get by with a single command), and something extra is required: an intermediate agent that can turn the screen-sharing content captured via FFmpeg into a WebRTC stream.
