Hi Jeremy,
First of all, I want to thank you for your effort and contributions to Licode. We really appreciate your work and your input. Get ready for a long answer.
I understand your concerns regarding the current state of recording. As it stands, Licode does not implement the mechanisms you mention to manage packet loss. This is partly due to limited development effort, but there are also some design decisions involved.
It doesn’t handle packet loss correctly; it should be sending a NACK on a lost packet on the RTCP back-channel.
It doesn’t handle FEC correctly (there’s an entire packet type of 117 that Licode is dropping on the floor).
Licode allows recording, yes, but it is primarily designed to be used in a videoconferencing scenario. We have given a low priority to implementing FEC handling and RTCP reporting because Chrome clients (subscribers) DO implement this, and Licode makes sure these RTCP messages (NACK, REMB, etc.) ARE forwarded (in a rather crude way at this point; we plan on improving this).
Publisher ---(A)---> Licode ---(B)---> Subscriber
For the communication, it is important to control packet loss on both legs, A and B. With our limited resources, we decided it was better to pass RTCP packets through from the subscriber than to generate a new RTCP stream from Licode itself.
This, of course, means that if there are no subscribers in the session, no packet loss will be reported, which is exactly the case of a record-only session. Also, as you point out, we discard RED data and FEC packets WHEN recording; we know that is quite far from optimal. Do you have other subscribers in your recording session?
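To make that last point concrete, recovering the primary payload from a RED packet (RFC 2198) instead of dropping it is not a huge amount of code. The sketch below only illustrates the wire format and is not actual Licode code; it assumes the RTP header has already been stripped, and the RED payload type (117 in your case) would come from the SDP negotiation:

// Minimal sketch of unwrapping RED (RFC 2198) instead of dropping it.
// `payload`/`length` point at the RED payload after the RTP header.
#include <cstdint>
#include <cstddef>

// Returns a pointer to the primary encoded payload inside a RED payload and
// fills primaryLength/primaryPt, or returns nullptr if the payload is malformed.
const uint8_t* unwrapRed(const uint8_t* payload, size_t length,
                         size_t* primaryLength, uint8_t* primaryPt) {
  size_t offset = 0;          // walks the block headers
  size_t redundantBytes = 0;  // total size of the redundant blocks before the primary one
  // Non-final headers are 4 bytes: F=1 | block PT (7) | ts offset (14) | block length (10)
  while (offset < length && (payload[offset] & 0x80)) {
    if (offset + 4 > length) return nullptr;
    redundantBytes += ((payload[offset + 2] & 0x03) << 8) | payload[offset + 3];
    offset += 4;
  }
  if (offset >= length) return nullptr;
  *primaryPt = payload[offset] & 0x7F;  // final header: 1 byte, F=0, primary payload type
  offset += 1;
  if (offset + redundantBytes > length) return nullptr;
  *primaryLength = length - offset - redundantBytes;
  return payload + offset + redundantBytes;
}

The primary block returned there could then be handed to ExternalOutput as if it had arrived as a plain encoded packet.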
Another difference with recording is that you need periodic keyframes to be able to seek within a recorded video, so we manually ask for them via FIR (Full Intra Request) messages.
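For reference, an FIR is a small fixed-size RTCP feedback message (PSFB with FMT=4, RFC 5104). The helper below is just a sketch of the 20-byte layout; the parameter names are illustrative and this is not the code we ship:

// Sketch of building an RTCP FIR (Full Intra Request, RFC 5104) packet,
// i.e. the message used to ask the publisher for a keyframe.
#include <cstdint>
#include <cstring>
#include <arpa/inet.h>  // htonl/htons

void buildFir(uint32_t senderSsrc, uint32_t mediaSsrc, uint8_t seqNumber,
              uint8_t packet[20]) {
  memset(packet, 0, 20);
  packet[0] = 0x84;                 // V=2, P=0, FMT=4 (FIR)
  packet[1] = 206;                  // PT=206 (payload-specific feedback)
  uint16_t length = htons(4);       // length in 32-bit words minus one (5 words total)
  memcpy(&packet[2], &length, 2);
  uint32_t sender = htonl(senderSsrc);
  memcpy(&packet[4], &sender, 4);   // SSRC of packet sender
  // The media source SSRC in the common header is 0 for FIR; the FCI entry
  // below carries the SSRC of the stream we want a keyframe for.
  uint32_t media = htonl(mediaSsrc);
  memcpy(&packet[12], &media, 4);   // FCI: SSRC of media sender
  packet[16] = seqNumber;           // FCI: command sequence number, bumped per request
}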
This approach has provided very good communication experiences in our projects and decent-to-good recording quality.
On how to proceed: as I mentioned, we want to add more intelligence to Licode regarding RTCP. I would definitely improve ExternalOutput and the (now almost entirely yours) RtpQueue so they can report packet loss, use redundancy packets and so on.
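As a rough idea of the direction: once RtpQueue can tell which sequence numbers are missing, building a Generic NACK (RFC 4585) for them is mostly a matter of packing a PID/BLP pair per gap. This is only a sketch of the packet layout under that assumption; none of these names exist in Licode today:

// Sketch of how ExternalOutput/RtpQueue could report losses back to the
// publisher with a Generic NACK (RFC 4585). Only the wire layout is shown.
#include <cstdint>
#include <cstring>
#include <vector>
#include <arpa/inet.h>

// Builds one NACK with a single FCI entry covering up to 17 consecutive losses:
// `pid` is the first missing sequence number, `blp` a bitmask of the 16 that follow.
std::vector<uint8_t> buildNack(uint32_t senderSsrc, uint32_t mediaSsrc,
                               uint16_t pid, uint16_t blp) {
  std::vector<uint8_t> packet(16, 0);
  packet[0] = 0x81;                 // V=2, P=0, FMT=1 (Generic NACK)
  packet[1] = 205;                  // PT=205 (transport-layer feedback)
  uint16_t length = htons(3);       // 4 words total, minus one
  memcpy(&packet[2], &length, 2);
  uint32_t sender = htonl(senderSsrc), media = htonl(mediaSsrc);
  memcpy(&packet[4], &sender, 4);   // SSRC of packet sender
  memcpy(&packet[8], &media, 4);    // SSRC of media source
  uint16_t netPid = htons(pid), netBlp = htons(blp);
  memcpy(&packet[12], &netPid, 2);  // FCI: first lost packet id
  memcpy(&packet[14], &netBlp, 2);  // FCI: bitmask of following losses
  return packet;
}

The gap detection itself would live in the queue (compare each incoming sequence number against the last one seen) and feed (pid, blp) pairs into something like the helper above.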
We don’t want to use Google’s library in Licode’s core for a variety of engineering, scientific, socio-political and philosophical reasons. We prefer to incrementally improve what we have, following the standards, while staying compatible with Google.
However, there is another option. You can use Google’s WebRTC library for recording without altering Licode: create a stand-alone WebRTC client, implement the negotiation as in erizoClient, and implement recording there (possibly by reusing our ExternalOutput code if you manage to get the encoded frames). That would require, IMO, less work than replacing Erizo’s WebRTC stack, and you could still use Licode as the MCU for room management, etc. In fact, I believe there are similar projects out there.
Finally, I would like to ask you to send me an example of a corrupted video so I can see exactly the kind of corruption you are referring to.
Cheers! –
Pedro Rodriguez
On 15 Sep 2014 at 22:11:46, Jeremy Noring (jnoring@hirevue.com) wrote:
I’ve been working on recording a lot in licode, and I’ve come to two conclusions:
It doesn’t handle packet loss correctly; it should be sending a NACK on a lost packet on the RTCP back-channel.
It doesn’t handle FEC correctly (there’s an entire packet type of 117 that Licode is dropping on the floor).
WebRTC itself seems to use a hybrid approach to deal with packet loss where they’ll use retransmits if the latency is low enough and FEC to provide stream-level redundancy. I know this approach must work well because I rarely (never?) see visible corruption in a WebRTC video stream, even with significant packet loss and latency in the local network connection. Unfortunately in my recorded content, corruption is fairly common and often pretty bad.
To solve this…I’m thinking the best approach may be to actually use libjingle-peerconnection in licode to handle RTP packet reordering on the video stream. This would involve sucking down the entire WebRTC project, building it, linking it against licode, stripping out the RTP packet queue I recently re-wrote, etc. It’s a ton of work. But I think it’d still be dramatically less work than re-writing the RTP/RTCP/NACK/FEC code in WebRTC, which is covered by a half-dozen RFCs that are definitely not trivial to implement. So I wanted to get the opinion of others here on this approach and see if there’s any alternate proposals?
I really, really need video recording that is largely free of corruption. So I have to do something. Any advice is hugely appreciated.
-Jeremy