We have a Licode-based videoconference application, available through a web application and an iOS app. We now want to handle connection bandwidth so that users with good connections can talk to each other at full quality, while users with worse connections can stay in the session without connectivity issues.
So our question is whether the MCU has any bandwidth-handling behaviour for subscribers with different connections. Say one user publishes his stream at 1000 kbps and one of the subscribers can't handle that bandwidth: does the MCU transcode the stream to reduce the bitrate, or something similar? Or does it only act as a relay? In the tests we've done, the MCU didn't seem to reduce anything, so the subscriber couldn't see the stream (we limited the subscriber from 1000 kbps to 200 kbps).
We are also interested in disabling the video track on the MCU-subscriber connection, but when we disable it remotely on the subscriber side, the video is no longer shown yet the bandwidth consumed stays high (it simply hides the video while still receiving it).
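This matches standard WebRTC behaviour: setting a received track's `enabled` flag to `false` only stops rendering, the RTP packets keep flowing. A sketch of one way to actually stop receiving video, assuming Licode's `room.subscribe` accepts `audio`/`video` flags (check your Licode version); `dropVideo` is a hypothetical wrapper of ours, demonstrated here against a stub room:

```javascript
// Hypothetical helper: stop receiving video by resubscribing audio-only,
// instead of merely disabling the track (which keeps RTP flowing).
function dropVideo(room, stream) {
  room.unsubscribe(stream);
  room.subscribe(stream, { audio: true, video: false }); // assumed subscribe flags
}

// Stub demo, so the call sequence can be seen outside a browser:
const calls = [];
const fakeRoom = {
  unsubscribe: s => calls.push(['unsub', s.id]),
  subscribe: (s, opts) => calls.push(['sub', s.id, opts.video])
};
dropVideo(fakeRoom, { id: 7 });
console.log(JSON.stringify(calls)); // [["unsub",7],["sub",7,false]]
```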
Our goal is to have three options for each published stream: first, audio with high-quality video; second, audio with low-quality video; and third, audio only. Subscribers should then be able to switch between these options depending on their connectivity (or let the MCU pick the option for each subscriber).
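The client-side selection step can be sketched as a small pure function. This is a hypothetical helper, not part of Licode; the thresholds and option names are made-up placeholders:

```javascript
// Hypothetical: map a measured downlink estimate (kbps) to one of the
// three subscription options described above. Thresholds are illustrative.
function pickOption(availableKbps) {
  if (availableKbps >= 600) return 'audio+hq-video';
  if (availableKbps >= 250) return 'audio+lq-video';
  return 'audio-only';
}

console.log(pickOption(1000)); // audio+hq-video
console.log(pickOption(300));  // audio+lq-video
console.log(pickOption(100));  // audio-only
```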
If we are not wrong, we can map each option to a spatial layer, but how is each spatial layer configured?
Can this behaviour be achieved with simulcast? It would be great to know, so we don't start working on something you are already working on.
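For reference, Licode requests simulcast through the publish options, with `numSpatialLayers` controlling the number of encoded layers. A minimal sketch (the `room.publish` call is commented out since it needs a browser and an Erizo room):

```javascript
// Simulcast is requested at publish time; numSpatialLayers controls how
// many quality layers the browser encodes (per Licode's publish options).
const publishOptions = {
  maxVideoBW: 1000,                   // publisher bitrate cap, in kbps
  simulcast: { numSpatialLayers: 2 }  // two quality levels
};

// In the browser, with an Erizo room and a local stream:
// room.publish(localStream, publishOptions);

console.log(publishOptions.simulcast.numSpatialLayers); // 2
```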
AFAIK simulcast is managed by the browser and you have no control over it. In your scenario I suggest using separate streams for audio and video. All users subscribe to the audio-only stream, but you can give them control over the video (e.g. pause by unsubscribing). Regarding video quality, if you've enabled simulcast then each user will receive the optimum quality for his connection speed and the number of spatial layers.
Publishing more than one stream for each of the options above is what we first thought of doing, but it would be a mess for our application logic. If simulcast can pick the optimum quality, that should be enough for us.
Also, we understand that the number of spatial layers (passed as a parameter) equals the number of different quality levels, is that correct? And can setting more than 2 spatial layers significantly increase CPU usage?
Based on the clients we've seen, there's no great impact on client performance when you compare simulcast with 2 spatial layers against publishing 2 separate streams. The overhead stayed below 10% CPU, and can be even lower in some cases.
First of all, I would like to thank you for the work you've done. We have tested spatial layers (only in Chrome, with 2 spatial layers) and it worked better than we expected, so it does what we need.
The only issue we had is that it only works with resolutions up to 640x480. We tried publishing streams at HD (1280x720) and Full HD (1920x1080) resolutions, but the video track wasn't published and subscribers could only hear audio. If we remove the simulcast parameter when publishing, it works perfectly, so is there any incompatibility with higher resolutions?
And finally, are there any plans to add this functionality to P2P connections?
You don't need simulcast in a P2P scenario.
When you publish to someone at the other end, your stream adapts its quality based on the receiver reports, so you don't need to send him different quality layers: you always (automatically) send the best video he can receive. In a P2P many-to-many room (a scenario I would strongly avoid), it's the same, since you open a separate peer connection for every one of your subscribers, and every peer connection adjusts itself based on its receiver reports.