On your WebRTC mixer

Hi Yan,

You can check our current development in the “media” branch:

https://github.com/ging/lynckia/tree/media

We will announce when we move it to the master branch. But you’re free
to develop and propose your own transcoder to the list and also to the
github project.

Cheers,
Javier.

On 4 December 2012 17:21, Yan Tang tang.yan@gmail.com wrote:

Thanks for your explanation.

See my followup questions.

On Tue, Dec 4, 2012 at 10:42 AM, javier cerviño jcervino@dit.upm.es wrote:

Hi Yan,

Great!! I’m glad you like it!! I’ll answer your questions below:

On 4 December 2012 16:33, Yan Tang tang.yan@gmail.com wrote:

Hi Javier,

I happened to read about your WebRTC mixer, and I played with your demo; it looks great.

However, I have a question about the mixer. It looks like what you do in the “mixer” is just forwarding. It doesn’t include decoding and re-encoding the video or audio from multiple parties.

In the online demo the mixer doesn’t actually encode or decode, but we’re finishing this feature now. You can check it in the code (www.lynckia.com).
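For context, a decode/mix/re-encode mixer (the MCU-style feature discussed here) could be sketched roughly as below. This is a hypothetical illustration, not the actual Lynckia code: `decode`, `mix`, and `encode` are stand-ins for a real codec library, operating on toy sample arrays.

```javascript
// Hypothetical MCU-style mixer sketch (not Lynckia's actual API):
// decode every incoming packet, composite the frames, re-encode once.

// Stand-in for a real decoder: unwrap raw samples from a packet.
function decode(packet) { return packet.samples; }

// Naive audio mix: element-wise average of all participants' samples.
function mix(frames) {
  const n = frames[0].length;
  const out = new Array(n).fill(0);
  for (const f of frames) {
    for (let i = 0; i < n; i++) out[i] += f[i] / frames.length;
  }
  return out;
}

// Stand-in for a real encoder: wrap samples back into a packet.
function encode(frame) { return { samples: frame }; }

// Full pipeline: N incoming packets in, one mixed packet out.
function mixParticipants(packets) {
  return encode(mix(packets.map(decode)));
}

const out = mixParticipants([
  { samples: [2, 4] },
  { samples: [6, 8] },
]);
console.log(out.samples); // → [ 4, 6 ]
```

The point of this topology, as opposed to pure forwarding, is that each participant receives a single composited stream instead of N-1 separate ones, at the cost of CPU-heavy transcoding on the server.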

I looked at the code at www.lynckia.com after getting your email ;). But I didn’t see any place showing that you are doing the decoding and encoding. I guess that part is probably hidden on the server side? I am more interested in this because it can be performance-critical if we want to build a scalable application.

My understanding of your mixer/demo is as follows: you have a node.js app running as a WebRTC client on your backend server. Then, when there is an N-party video conference call, each participant sends its media stream to the “mixer” and the mixer forwards it to the N-1 other participants. Am I right?

Yes, that’s the default behaviour. But as I mentioned before, you can modify the code.
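The forwarding model confirmed above (each stream relayed unchanged to the N-1 other participants) can be sketched in a few lines of node.js. This is a hypothetical illustration, not the project’s actual code; `Relay`, `join`, and `forward` are made-up names.

```javascript
// Hypothetical SFU-style relay sketch (not Lynckia's actual API):
// packets are forwarded as-is, with no decoding or re-encoding.
class Relay {
  constructor() {
    this.participants = new Map(); // id -> callback receiving packets
  }
  join(id, onPacket) {
    this.participants.set(id, onPacket);
  }
  leave(id) {
    this.participants.delete(id);
  }
  // A packet from `senderId` is passed through unchanged to everyone else.
  forward(senderId, packet) {
    for (const [id, onPacket] of this.participants) {
      if (id !== senderId) onPacket(packet);
    }
  }
}

// Usage: three participants; a packet from "a" reaches "b" and "c" only.
const relay = new Relay();
const received = { a: [], b: [], c: [] };
for (const id of ["a", "b", "c"]) relay.join(id, p => received[id].push(p));
relay.forward("a", "rtp-packet-1");
console.log(received.b.length, received.c.length, received.a.length); // 1 1 0
```

Note the scaling implication of this topology: the server does almost no CPU work per stream, but each participant must decode N-1 incoming streams, and server bandwidth grows with N(N-1).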

We’ve recently created a mailing list for these kinds of questions. You can subscribe here: www.lynckia.com/community.html

Thanks.

Yan

Thanks!

Cheers.
Javier

Very cool!

I will take a careful look. You guys did a great job on this project!

On Tue, Dec 4, 2012 at 11:34 AM, javier cerviño jcervino@dit.upm.es wrote:
