I suppose the cool kids use WebRTC for live video instead, but my use case is an NVR, where I'd like a uniform way of handling playback and live. The whole HLS spec seems arcane compared to a protocol in which the server pushes data over a WebSocket. Even MSE is more complicated than I'd like. I'd rather try WebCodecs, but only Chrome supports it AFAIK. Or maybe they gave up on the idea? Dunno.

> where I'd like a uniform way of handling playback and live

I don't think WebRTC really does playback.

Seems to me that WebRTC and HLS solve two different problems, though. WebRTC largely prefers dropping packets to stay real-time, while HLS buffers and preserves every frame by default rather than drop frames to stay real-time. One is designed for calls (real-time) and one is not (streaming). HLS isn't designed around real-time signalling; instead it's designed around making requests for frames/bytes effectively sequentially. Now, if you're not distributing requests at scale, HLS is indeed overkill. But if you might have latency and want to stream uninterrupted video footage, it's a necessary evil. If you pick WebRTC, you've no easy way to ask to pick up where you left off, because the default is just a stream of "real-time now" packets, and dropped packets are lost forever. MSE would be a way of capturing packets, but if the protocol doesn't let you sequentially access bytes starting from a timestamp, you're stuck when trying to resume a stream, no? I might have misunderstood something, but they do seem like they serve different purposes.

> WebRTC largely prefers dropping packets to stay real-time

I haven't played with WebRTC yet, but my understanding is it's based on RTP over an unreliable transport, which I am familiar with. In my experience, this doesn't work as well as people say.
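The "resume from a timestamp" point is worth making concrete. A minimal sketch of what sequential, timestamp-addressable access buys an NVR: given a stored segment index, a requested time maps straight to a segment, whereas a drop-and-forget real-time stream has no equivalent handle. The index layout and names (`segmentForTimestamp`, `seq`) are hypothetical, not from any real player:

```javascript
// Hypothetical segment index for an NVR-style recording: each entry is a
// media segment with a start timestamp (ms) and a duration (ms).
const index = [
  { seq: 0, start: 0, duration: 2000 },
  { seq: 1, start: 2000, duration: 2000 },
  { seq: 2, start: 4000, duration: 2000 },
];

// Resuming playback from time t means finding the segment containing t.
// With HLS/MSE-style sequential access this is a trivial lookup; with a
// "real-time now" packet stream there is nothing to look up.
function segmentForTimestamp(index, t) {
  for (const seg of index) {
    if (t >= seg.start && t < seg.start + seg.duration) return seg.seq;
  }
  return null; // a gap: that footage is simply not addressable
}

segmentForTimestamp(index, 2500); // → 1
```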
> As compared to a WebSocket where the server pushes segments as soon as they're available.

This didn't become a requirement for LL-HLS, according to this blog post, but you could build the same push technique using HLS and HTTP/2 to push data to the client before it needs to ask for it (or, as suggested in the post, provide hint URLs for where the next block will be…). Edit: the full spec is at [link] and doesn't appear to mention push directly, but does require HTTP/2. It mentions more details, but I don't have time to read them now. I expect folks are looking at this very closely, but the current protocol obviously lets the client be in full control over what gets sent, which would be somewhat lost if relying exclusively on HTTP/2 server-initiated push… For example, clients can request separate caches of low and high bitrate media to always ensure they have something to play back in time…

That seems like a significant improvement, in that the client can do a hanging GET and have the server respond when the media segment exists, avoiding that transit latency. (I guess the silver lining there is that this is a flow control mechanism, so the server won't keep sending more data if the client isn't keeping up with previous data. My custom WebSocket protocol doesn't really do flow control right now, beyond what TCP does.) I'm really confused by the HTTP/2 push thing that article mentioned. My general understanding of HTTP/2 push is that the clients implement it by adding the pushed data to their cache and nothing more. If the client later needs that URL, it loads more quickly. (IIRC Chrome dropped support.) But there's no JS API for doing something as soon as a push is received. Maybe Safari has a special hook in their HLS implementation that bypasses the polling delay if the media segment is pushed, but if so, I don't know where that's documented, and it's a surprising behavior.
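The hanging-GET idea can be sketched with an in-memory stand-in for the server side. The class and method names below are made up for illustration; a real server would tie `waitFor` to a parked HTTP response:

```javascript
// Minimal sketch of a "hanging GET": the client asks for a media segment
// that may not exist yet, and the server answers the moment it does,
// instead of the client polling the playlist until it appears.
class SegmentStore {
  constructor() {
    this.segments = new Map(); // seq -> data
    this.waiters = new Map();  // seq -> [resolve, ...]
  }
  // Handler for a blocking request: resolves immediately if the segment is
  // already published, otherwise parks the caller until publish() runs.
  waitFor(seq) {
    if (this.segments.has(seq)) return Promise.resolve(this.segments.get(seq));
    return new Promise((resolve) => {
      const list = this.waiters.get(seq) ?? [];
      list.push(resolve);
      this.waiters.set(seq, list);
    });
  }
  // What the encoder calls when a segment is finished: store it and wake
  // every parked request for that sequence number.
  publish(seq, data) {
    this.segments.set(seq, data);
    for (const resolve of this.waiters.get(seq) ?? []) resolve(data);
    this.waiters.delete(seq);
  }
}
```

The promise either resolves at once (segment exists) or parks until the encoder publishes it, so no polling round trip is spent waiting for the segment to appear.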
Fair point; I can see them making the choice for that reason. But I'm not sure it's actually true that HLS is more optimal:

* On the one hand, the MSE path makes all the data flow through Javascript. At least that means chunks must be garbage-collected. I can see it meaning an extra copy or two also.

* On the other hand, doesn't HLS require polling and extra round trips? You fetch the .m3u8 manifest repeatedly, and then fetch the media segments it indicates. If you want lowest possible latency, well, it's 1.5 trips higher than necessary, as well as requiring extra network requests (costing battery for wake-ups and a bit of bandwidth/radio time). As compared to a WebSocket where the server pushes segments as soon as they're available.

In an application of mine, I've thought about writing web worker code to basically create a HLS API for my service from the real, WebSocket-based API. Of course, this means everything would flow through Javascript, totally defeating what advantage HLS has and requiring (device-internal, fortunately) polling.
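The "1.5 trips higher" figure checks out with simple arithmetic, under the assumption that the playlist fetch happens to land exactly when the new segment appears (these functions are illustrative, not measurements):

```javascript
// Once a new segment exists on the server:
//  - plain HLS: the client fetches the playlist (1 round trip), sees the
//    new segment listed, then fetches it (1 more round trip) => 2 RTT
//  - WebSocket push: the server just sends it => one server-to-client leg,
//    i.e. 0.5 RTT
function hlsFetchDelay(rttMs) {
  return 2 * rttMs; // playlist round trip + segment round trip
}
function wsPushDelay(rttMs) {
  return 0.5 * rttMs; // single one-way trip
}
// Extra HLS latency expressed in round trips: (2 - 0.5) = 1.5
function extraTrips(rttMs) {
  return (hlsFetchDelay(rttMs) - wsPushDelay(rttMs)) / rttMs;
}
```

And that is the best case; if the segment appears just after a playlist poll, the full polling interval gets added on top.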