# vncstream

Prototype of collabvm 3.0's (2.0... yep, I get the irony) new H.264 video streaming, on both the client and server.

Some changes will need to happen before integration is even considered; they are listed below.

## Server side changes (probably)

- Full-HW encode (via a hw frame context)
  - Investigate whether EGL device platform contexts can be used. In theory, if it works with CUDA's OpenGL interop (I don't see why it wouldn't), we can just share a
- Code cleanup
  - Also maybe NAL SPS rewriting (stolen from WebRTC) to force 1:1 decoding, although it seems to be fine most of the time... (see the sketch after this list)
  - Maybe pull it out into its own crate instead of it being fairly tightly packed in
- Output a LOC-like container that can hold H.264 NAL packets, Opus packets, or both "interleaved" into a single container entry (a parsing sketch follows this list)
  - The client will parse this as well
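For a rough idea of where an SPS rewrite would hook in, here's a minimal TypeScript sketch (a hypothetical helper, not anything in this repo) that walks an Annex B H.264 stream and flags the SPS NAL units (nal_unit_type 7) that a WebRTC-style rewrite would operate on:

```ts
// Sketch: locate NAL units in an Annex B H.264 stream and record the
// offsets of SPS units (nal_unit_type 7), the units an SPS rewrite
// would touch. Hypothetical helper, not the actual implementation.
function findSpsUnits(stream: Uint8Array): number[] {
  const spsOffsets: number[] = [];
  for (let i = 0; i + 3 < stream.length; i++) {
    // Annex B start code: 00 00 01 (a leading 00 00 00 01 also matches here).
    if (stream[i] === 0 && stream[i + 1] === 0 && stream[i + 2] === 1) {
      const nalType = stream[i + 3] & 0x1f; // low 5 bits of the NAL header
      if (nalType === 7) {
        spsOffsets.push(i + 3); // offset of the SPS NAL header byte
      }
      i += 2; // skip past the start code
    }
  }
  return spsOffsets;
}
```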
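The container format itself isn't pinned down yet, so the entry layout below is purely an assumption for illustration: a 1-byte kind tag plus a big-endian u32 length prefix per entry. A client-side parse might look something like:

```ts
// Hypothetical entry layout, for illustration only (the real format is
// not finalized): [u8 kind][u32 length][payload bytes], where kind
// 0 = H.264 NALs, 1 = Opus packets, 2 = both interleaved.
interface ContainerEntry {
  kind: 'h264' | 'opus' | 'interleaved';
  payload: Uint8Array;
}

function parseEntries(buf: Uint8Array): ContainerEntry[] {
  const kinds = ['h264', 'opus', 'interleaved'] as const;
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  const entries: ContainerEntry[] = [];
  let off = 0;
  while (off + 5 <= buf.length) {
    const kindByte = view.getUint8(off);
    const len = view.getUint32(off + 1); // big-endian length prefix
    off += 5;
    // Stop on an unknown kind tag or a truncated entry.
    if (kindByte >= kinds.length || off + len > buf.length) break;
    entries.push({ kind: kinds[kindByte], payload: buf.subarray(off, off + len) });
    off += len;
  }
  return entries;
}
```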

## Client

- Warn when WebCodecs is not supported (see the detection sketch after this list)
- Code cleanup
  - Maybe the video playing code could even be pulled out into its own thing?
- WebSockets probably will not be used, because they blow
  - WebSocketStream "helps" by getting rid of the even bigger elephant in the room (backpressure, which is a "fun" feature of the originally standardized DOM API), but the reality is that TCP head-of-line blocking and many other issues mean anything over TCP will be meh at best and actively bad at worst.
  - MoQ over WebTransport is probably the way to go anyway, although if we diverge from it we should standardize a WebTransport subprotocol to communicate that we're different (and agree on it everywhere). A connection sketch follows this list.
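Detecting WebCodecs support is straightforward. A sketch (the codec string here is an example, not necessarily what the server will emit):

```ts
// Feature-detect WebCodecs before trying to stream; VideoDecoder is the
// API the player depends on. How the warning is surfaced is up to the client.
async function checkWebCodecs(): Promise<boolean> {
  if (typeof VideoDecoder === 'undefined') {
    console.warn('WebCodecs is not supported in this browser');
    return false;
  }
  // Also confirm the browser can decode the profile we'd send.
  // "avc1.42E01E" (Baseline) is only an example codec string.
  const support = await VideoDecoder.isConfigSupported({
    codec: 'avc1.42E01E',
  });
  if (!support.supported) {
    console.warn('WebCodecs cannot decode H.264 here');
    return false;
  }
  return true;
}
```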
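And a minimal WebTransport receive loop, as a sketch of the MoQ-ish direction. The endpoint URL and any subprotocol negotiation are assumptions; nothing here is decided:

```ts
// Sketch: connect over WebTransport and drain incoming unidirectional
// streams. Each stream could carry one media track (or one group of
// objects, in MoQ terms). The URL is hypothetical.
async function connect(url: string): Promise<void> {
  const transport = new WebTransport(url);
  await transport.ready;

  const streams = transport.incomingUnidirectionalStreams.getReader();
  for (;;) {
    const { value: stream, done } = await streams.read();
    if (done || !stream) break;
    const reader = stream.getReader();
    for (;;) {
      const { value: chunk, done: streamDone } = await reader.read();
      if (streamDone || !chunk) break;
      // Feed each chunk into the container parser sketched earlier.
      // (Real code would buffer until a whole entry has arrived, since
      // transport chunks need not align with entry boundaries.)
      parseEntries(chunk);
    }
  }
}
```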