Question about bandwidth usage after an audio / video stream is enabled or disabled

Hello everybody,

Let me quickly describe my project so you can better understand the goal of
my question:

“In a chat room, people can share their video, but only one person can
speak at once (by clicking a dedicated icon). When a person has nothing
else to say, another person can speak.”

In this project, bandwidth usage is very important.
So I tried different approaches to publish the audio stream only for the
person who is currently speaking, while the video stream stays published all
the time.

I found different posts here, and the most interesting one was this:
https://groups.google.com/forum/#!searchin/lynckia/getAudioTracks()/lynckia/WN7I6Gz9QCA/k_6IFSlLL8wJ

So I tried both solutions, but in both cases I have a small problem, so maybe
someone here can give me advice on how to solve it.

Solution 1. audio and video in one stream (audio enabled only on request)

If I want a publisher to speak, I just enable their audio track with this
line:
localStream.stream.getAudioTracks()[0].enabled = true;

and I disable the audio track for all the other publishers with this line:
localStream.stream.getAudioTracks()[0].enabled = false;

This works well: as soon as someone starts speaking, their voice can be heard
instantly by all the subscribers.
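For context, here is roughly the helper I use to drive those two lines. The function name and the `streams` array are just mine, for illustration; each element is assumed to expose the standard MediaStream `getAudioTracks()` API (like `localStream.stream` in Licode):

```javascript
// Sketch of the "one speaker at a time" toggle from Solution 1.
// `streams` is an array of objects exposing getAudioTracks();
// `activeIndex` is the person currently holding the floor, or -1 for nobody.
function setActiveSpeaker(streams, activeIndex) {
  streams.forEach(function (stream, i) {
    stream.getAudioTracks().forEach(function (track) {
      // enabled = false mutes the track, but the sender keeps emitting
      // packets, which is why the bandwidth does not drop to zero.
      track.enabled = (i === activeIndex);
    });
  });
}
```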

The problem: when the audio track is disabled (on the publisher side), the
bandwidth used by the stream is the same as when it is enabled. Is that
normal? So the bandwidth used to send the stream to all the viewers is the
same as if there were audio+video, even though only video is received. Is
there a way to improve this?

Solution 2. creating one stream for audio only, and one stream for video
only

Actually, even though the audio-only stream can be published (I noticed on
the server that the bandwidth matches an audio-only stream), I was unable to
play it when a subscriber subscribes to it. I don't know if I did something
wrong, but it seems that only video or video+audio can be subscribed to
successfully. An audio-only stream is not played.
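For what it's worth, here is a sketch of what I tried for Solution 2. The helper name is mine and `Erizo`/`room` are passed in just so it can be exercised outside the browser; `Erizo.Stream`, `stream.init()` and `room.publish()` come from the Licode client API, and the 300 kbps video cap is only an illustrative value:

```javascript
// Sketch of Solution 2: one permanent video-only stream plus a separate
// audio-only stream that would be published on demand.
function publishSplitStreams(Erizo, room) {
  var videoStream = Erizo.Stream({video: true, audio: false, data: false});
  var audioStream = Erizo.Stream({audio: true, video: false, data: false});

  videoStream.addEventListener('access-accepted', function () {
    // Video is published permanently, capped to keep usage predictable.
    room.publish(videoStream, {maxVideoBW: 300});
  });
  audioStream.addEventListener('access-accepted', function () {
    // 64 seemed to be the minimum value accepted for maxAudioBW in my tests.
    room.publish(audioStream, {maxAudioBW: 64});
  });

  videoStream.init();
  audioStream.init();
  return {video: videoStream, audio: audioStream};
}
```

In the real application I would of course only publish the audio stream when the user clicks the speak icon, and unpublish it (with `room.unpublish`, if I read the API correctly) when they release the floor.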

For both solutions, maxAudioBW and maxVideoBW are used (tested with Licode
release 1523).

For both solutions, when I publish the stream I used the maxAudioBW and
maxVideoBW parameters to limit the stream's bandwidth usage and have better
control when checking the bandwidth usage.

I noticed that 64 seems to be the minimum value for maxAudioBW. If I set a
lower value, the room.publish instruction seems not to work, or something
goes wrong and the stream cannot be viewed or published.

So in the end I have an audio stream that uses almost as much bandwidth as
the video stream, even though it's used by only one person at a time.

Is there a way to reduce the audio bandwidth usage in my situation?

Thank you for your help.

Guillaume

Hello again.

I just tried "Solution 2" again with a fresh basic example, and it's finally
working. I surely did something wrong in my previous test.
So I think I can use this solution for the start of my project.

Sorry for bothering you after all.

Thank you.
Guillaume.

On Thursday, May 29, 2014 at 12:53:38 UTC+2, Guillaume Lepinay wrote:
