WebRTC applications need to do several things (a rough sketch of the APIs involved follows this list):
1. Get streaming audio, video, or other data.
2. Get network information such as IP addresses and ports, and exchange this with other WebRTC clients to enable connection.
3. Report errors and initiate or close sessions.
4. Exchange information about media and client capability.
5. Communicate streaming audio, video, or data.
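These tasks map onto a small set of browser APIs: getUserMedia (the MediaStream API, covered next), RTCPeerConnection, and RTCDataChannel. As a minimal sketch of how the networking pieces fit together (the STUN URL and sendSignalingMessage() below are placeholders, not part of the WebRTC API):

var pc = new RTCPeerConnection({
  iceServers: [{urls: 'stun:stun.example.org'}] // placeholder STUN server, used to discover network candidates
});

// Arbitrary application data can travel over a data channel on the same connection.
var channel = pc.createDataChannel('chat');

// Network information (ICE candidates) is surfaced by the peer connection and must be
// relayed to the other client over a signaling channel that the application provides.
pc.onicecandidate = function(event) {
  if (event.candidate) {
    sendSignalingMessage({candidate: event.candidate}); // hypothetical signaling helper
  }
};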
MediaStream
The MediaStream API is one part of WebRTC.
Each MediaStream has an input, which might be a stream generated by navigator.getUserMedia().
For example:
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

var constraints = {audio: false, video: true};
var video = document.querySelector('video');

function successCallback(stream) {
  window.stream = stream; // stream available to console
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
  video.play();
}

function errorCallback(error) {
  console.log('navigator.getUserMedia error: ', error);
}

navigator.getUserMedia(constraints, successCallback, errorCallback);
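This snippet assumes the page already contains a video element (for example, <video autoplay></video>) for document.querySelector('video') to find.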
navigator.getUserMedia() takes three parameters:
1. A constraints object (see the sketch after this list).
2. A success callback, which is passed a MediaStream.
3. An error callback.
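The constraints object can express more than a plain boolean. As a hedged sketch, assuming the newer promise-based form of the same API (navigator.mediaDevices.getUserMedia) and standard MediaTrackConstraints keys, whose support varies by browser:

var constraints = {
  audio: true,
  video: {width: {min: 640}, height: {min: 360}} // ask for video of at least 640x360
};

navigator.mediaDevices.getUserMedia(constraints)
  .then(function(stream) {
    document.querySelector('video').srcObject = stream; // srcObject replaces createObjectURL in newer browsers
  })
  .catch(function(error) {
    console.log('getUserMedia error: ', error);
  });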
The offer/answer architecture used to exchange this session information is called JSEP, the JavaScript Session Establishment Protocol.
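To make the offer/answer exchange concrete, here is a minimal sketch of the caller's side with RTCPeerConnection; sendOffer() and onAnswerReceived() stand in for whatever signaling channel (WebSocket, XHR, and so on) the application uses:

var pc = new RTCPeerConnection();

// Create an SDP offer describing local media and capabilities, store it as the
// local description, then deliver it to the remote peer over signaling.
pc.createOffer()
  .then(function(offer) {
    return pc.setLocalDescription(offer);
  })
  .then(function() {
    sendOffer(pc.localDescription); // hypothetical signaling helper
  })
  .catch(function(error) {
    console.log('createOffer error: ', error);
  });

// When the remote peer's SDP answer comes back over signaling, apply it.
function onAnswerReceived(answer) {
  pc.setRemoteDescription(new RTCSessionDescription(answer));
}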