Libnice / nicer to use tcp


#1

Hello guys!
One question: I noticed that erizo doesn’t send ICE candidates to the browser; instead it inserts two candidate lines directly into the offer,
and these are UDP only.
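For reference, the candidate lines embedded in the offer are standard SDP ICE candidate attributes; a TCP candidate (RFC 6544) uses the same attribute with `tcp` as the transport plus a `tcptype` field (active TCP candidates use the discard port 9). The addresses, priorities, and foundations below are made-up placeholders:

```
a=candidate:1 1 udp 2013266431 198.51.100.7 36768 typ host
a=candidate:2 1 tcp 1010827775 198.51.100.7 9 typ host tcptype active
```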

Digging into the C++ code, is it possible to make erizo also generate some TCP candidates? If it is, can you point me to the right files?

I’m also using Licode for broadcasting, and I think TCP is far more reliable for non-real-time streaming.

I managed to get TCP working via TURN, but the quality is meh.

Thanks guys!


#2

This is not, unfortunately, an answer to your question. More a question from me.

I would like to know why you want to use Licode for broadcasting. Personally I think that using something like Icecast2 is a much, much more viable solution.

With Icecast2 it is easy to set up a solution with proper load balancing and practically unlimited scalability; it is proven technology.

When you are providing a broadcast, you expect proper ordering of packets and no loss of data, i.e. no loss of quality at all (use TCP, of course). But that is not suitable for real-time interaction. In real-time (low-latency) situations you just have to accept lost data and quality problems (UDP does not promise that your packet reaches the destination in time).

Licode is, IMHO, very good when you are dealing with real-time (low latency) situations. But I would not consider Licode (or any other webrtc-based solution) for broadcasting. The situation is different both in the sense of use cases and the technological constraints.

I have set up quite a few events in which the low-latency use cases were handled with Licode. The broadcasting was handled with Icecast2 (it scales very well and is extremely easy to set up). The streams were, of course, converted to formats suitable for broadcasting.

But you should expect over 20 seconds of latency in broadcasting. In many cases the actual latency is at least 60-120 seconds or even more, which is useless for low-latency use cases.

In other words: Licode is very, very good with low-latency use-cases. But I would not consider it a tool for broadcasting (i.e. for situations in which the latency is not a problem).

The aims of tools like Licode and Icecast2 are quite different, IMHO, even when the streams themselves are the same (i.e. carry the same content).


#3

I was a bit inaccurate: Icecast2 is good for audio, which is what I am most interested in. For broadcasting video I would consider using a private YouTube channel and letting Google handle load balancing and similar issues.

At least in our analysis of several use cases we found that most remote listeners/viewers (no interaction, only viewing or listening) do not really need a low-latency solution.

The situation is, of course, different if your use case requires low latency or is interactive.


#4

@Francesco_Durighetto
Could you get Licode to work with TCP?


#5

Not really. I mean, I tried various firewall configurations and blocked everything except DNS (over UDP).
What happened is that TURN kicked in and relayed everything over TCP, even though the candidates were all UDP;
it never connected directly to Licode over TCP.


#6

Thanks for the reply.
May I ask what TURN server you use/recommend?
Are you happy with the results in terms of latency and quality?


#7

We used Xirsys for a year: good, but nothing special. It was mainly located in the US, and when our business moved to Europe it was a little unstable. I don’t think it was geolocated.
Now they claim they have a new ecosystem with v3 which is more scalable and geolocated.
But we have already moved to the Twilio Global Network Traversal Service, which I absolutely recommend.
They have tons of geolocated machines and they use Amazon as the provider, which for me is a guarantee.


#8

Thanks for your advice. Actually, I am looking for a software tool like coturn so I can run TURN on my own servers.
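If you go the coturn route, a minimal `turnserver.conf` sketch with long-term credentials looks roughly like this; the realm, user, and password are placeholders you would replace with your own:

```
# /etc/turnserver.conf (illustrative values only)
listening-port=3478
tls-listening-port=5349
fingerprint
lt-cred-mech
user=demo:secretpassword
realm=turn.example.com
```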


#9

That could be an option, but I recommend placing a load balancer in front and deploying at least one TURN server in Europe and one in America to keep latency low.
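On the client side, a multi-region setup can be expressed by simply listing one TURN URL per region in the ICE server configuration; ICE probes every server and settles on the candidate pair with the best connectivity, so each client naturally ends up on the nearest relay. The hostnames and credentials below are placeholders:

```javascript
// Hypothetical multi-region TURN configuration; hostnames, usernames,
// and credentials are placeholders. ICE gathers relay candidates from
// every server listed and picks the best-performing pair.
const iceServers = [
  {
    urls: "turn:turn-eu.example.com:3478?transport=tcp",
    username: "demo",
    credential: "secret",
  },
  {
    urls: "turn:turn-us.example.com:3478?transport=tcp",
    username: "demo",
    credential: "secret",
  },
];

// In the browser this object is passed straight to the peer connection:
// const pc = new RTCPeerConnection({ iceServers });
```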


#10

Good point. I’ll keep that in mind. :ok_hand:
Thanks.