Feeding audio streams to speech recognition engine on server

Hi

I am new to WebRTC and only found out about Licode today, so sorry if my
question is vague. I am trying to find out what I can achieve with Licode.

Say my server has a speech recognition engine, and the server is capable of
receiving audio streams from remote web browsers (Chrome) through WebRTC
peer connections. I would like to feed the received audio streams to the
speech recognition engine so it can convert the speech to text and then
send the resulting text back to the browser in real time.
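In case it helps to clarify, here is a rough sketch (in TypeScript, for a
Node server) of the flow I am imagining. SpeechEngine, feedAudio, and
sendToBrowser are all hypothetical placeholders for my own components, not
Licode APIs:

// Hypothetical interface for my recognition engine (not a Licode API).
interface SpeechEngine {
  // Feed a chunk of decoded PCM audio; resolves with any recognized text.
  feedAudio(pcm: Buffer): Promise<string | null>;
}

// Called whenever a chunk of decoded audio arrives from the WebRTC stream.
async function handleAudioChunk(
  engine: SpeechEngine,
  pcm: Buffer,
  sendToBrowser: (text: string) => void, // e.g. a WebSocket back to the client
): Promise<void> {
  const text = await engine.feedAudio(pcm);
  if (text) {
    // Relay the transcript back to the browser as soon as it appears.
    sendToBrowser(text);
  }
}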

With Licode, is it possible to receive audio streams and process them on
the server (for example, save them to a .WAV file) or feed them into other
components, such as a speech recognition engine?
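For the .WAV part in particular, the container format itself is simple, so
assuming I could get decoded 16-bit PCM out of the server somehow (which is
exactly what I am asking about), wrapping it in a WAV header would look
roughly like this:

import { writeFileSync } from "fs";

// Wrap raw 16-bit little-endian PCM in a minimal 44-byte WAV header and
// write it to disk. Getting the decoded PCM in the first place is the part
// I do not know how to do with Licode.
function savePcmAsWav(
  pcm: Buffer,
  path: string,
  sampleRate = 48000, // WebRTC audio (Opus) is commonly decoded at 48 kHz
  channels = 1,
): void {
  const bytesPerSample = 2; // 16-bit samples
  const header = Buffer.alloc(44);

  header.write("RIFF", 0);
  header.writeUInt32LE(36 + pcm.length, 4); // file size minus first 8 bytes
  header.write("WAVE", 8);

  header.write("fmt ", 12);
  header.writeUInt32LE(16, 16);  // fmt chunk size for plain PCM
  header.writeUInt16LE(1, 20);   // audio format: 1 = PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(sampleRate * channels * bytesPerSample, 28); // byte rate
  header.writeUInt16LE(channels * bytesPerSample, 32);              // block align
  header.writeUInt16LE(bytesPerSample * 8, 34);                     // bits per sample

  header.write("data", 36);
  header.writeUInt32LE(pcm.length, 40);

  writeFileSync(path, Buffer.concat([header, pcm]));
}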

Many thanks