- Learning WebRTC
- Dan Ristic
Handling multiple devices
In some cases, users may have more than one camera or microphone attached to their device. This is especially common on mobile devices, which often have both a front-facing and a rear-facing camera. In this case, you want to search through the available cameras or microphones and select the appropriate device for your user's needs. Fortunately, the browser exposes an API called MediaStreamTrack to do this.
Note
On the other hand, since many of these APIs are still being created, not everything will be supported by all browsers. This is especially the case with MediaStreamTrack.getSources, which is only supported in the latest version of Chrome at the time of writing this book.
With MediaStreamTrack, we can ask for a list of devices and select the one we need:
MediaStreamTrack.getSources(function (sources) {
  var audioSource = null;
  var videoSource = null;

  for (var i = 0; i < sources.length; ++i) {
    var source = sources[i];
    if (source.kind === "audio") {
      console.log("Microphone found:", source.label, source.id);
      audioSource = source.id;
    } else if (source.kind === "video") {
      console.log("Camera found:", source.label, source.id);
      videoSource = source.id;
    } else {
      console.log("Unknown source found:", source);
    }
  }

  var constraints = {
    audio: {
      optional: [{sourceId: audioSource}]
    },
    video: {
      optional: [{sourceId: videoSource}]
    }
  };

  navigator.getUserMedia(constraints, function (stream) {
    var video = document.querySelector('video');
    video.src = window.URL.createObjectURL(stream);
  }, function (error) {
    console.log("Raised an error when capturing:", error);
  });
});
This code calls getSources on MediaStreamTrack, which gives you a list of the sources attached to the user's device. You can then iterate through them and select the one preferable to your application. If you open the development console while running this code, you will see the devices currently connected to the computer printed out. For instance, my computer has two microphones and one camera, as shown in the following screenshot:

The source may also contain information such as which direction it faces to help with selection. With more progress and time, the browser could potentially provide a lot more information about the different devices available, such as their supported resolutions and frames per second (fps). Always be sure to research the latest updates to the getUserMedia and MediaStreamTrack APIs to see which browsers have added more features.
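As it turned out, MediaStreamTrack.getSources was later deprecated; in current browsers, the equivalent is navigator.mediaDevices.enumerateDevices, which returns a promise that resolves to a list of MediaDeviceInfo objects. The following is a rough sketch of the modern approach, not the book's original API; the groupByKind helper is my own naming, and the page is assumed to contain a video element:

```javascript
// Group a MediaDeviceInfo-style list by kind
// ("audioinput", "videoinput", or "audiooutput").
function groupByKind(devices) {
  var groups = { audioinput: [], videoinput: [], audiooutput: [] };
  for (var i = 0; i < devices.length; ++i) {
    var device = devices[i];
    if (groups[device.kind]) {
      groups[device.kind].push(device);
    }
  }
  return groups;
}

// Browser-only part, guarded so the file also loads outside a browser:
// list the devices, then request the first microphone and camera found.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.enumerateDevices().then(function (devices) {
    var byKind = groupByKind(devices);
    var constraints = {
      audio: byKind.audioinput.length > 0 ?
        { deviceId: byKind.audioinput[0].deviceId } : true,
      video: byKind.videoinput.length > 0 ?
        { deviceId: byKind.videoinput[0].deviceId } : true
    };
    return navigator.mediaDevices.getUserMedia(constraints);
  }).then(function (stream) {
    // srcObject replaces the older createObjectURL(stream) pattern
    document.querySelector('video').srcObject = stream;
  }).catch(function (error) {
    console.log("Raised an error when capturing:", error);
  });
}
```

Note that device labels are typically empty until the user has granted media permission at least once, so you may need to call getUserMedia before enumerating devices if you want human-readable names.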
Creating a photo booth application
One of the best parts of the Web is that everything works together. This makes it easy to build complex applications, such as a photo booth, by combining getUserMedia with other APIs like Canvas. A photo booth application allows you to see yourself on the screen while capturing pictures of yourself, much like a real photo booth. The Canvas API is a set of methods for drawing lines, shapes, and images on the screen, popularized by games and other interactive applications across the Web.
In this project, we are going to use the Canvas API to draw a frame of our video to the screen. It will take the current feed from our video element, translate it into a single image, and draw that image to a <canvas> element. We will set up our project with a simple HTML file:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Learning WebRTC - Chapter 2: Get User Media</title>
    <style>
      video, canvas {
        border: 1px solid gray;
        width: 480px;
        height: 320px;
      }
    </style>
  </head>
  <body>
    <video autoplay></video>
    <canvas></canvas>
    <button id="capture">Capture</button>

    <script src="photobooth.js"></script>
  </body>
</html>
Tip
Downloading the example code
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
All we have done is add a canvas to the page and load the photobooth.js file. Our JavaScript file is where the functionality lies:
function hasUserMedia() {
  return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
            navigator.mozGetUserMedia || navigator.msGetUserMedia);
}

if (hasUserMedia()) {
  navigator.getUserMedia = navigator.getUserMedia ||
                           navigator.webkitGetUserMedia ||
                           navigator.mozGetUserMedia ||
                           navigator.msGetUserMedia;

  var video = document.querySelector('video'),
      canvas = document.querySelector('canvas'),
      streaming = false;

  navigator.getUserMedia({
    video: true,
    audio: false
  }, function (stream) {
    video.src = window.URL.createObjectURL(stream);
    streaming = true;
  }, function (error) {
    console.log("Raised an error when capturing:", error);
  });

  document.querySelector('#capture').addEventListener('click', function (event) {
    if (streaming) {
      canvas.width = video.clientWidth;
      canvas.height = video.clientHeight;

      var context = canvas.getContext('2d');
      context.drawImage(video, 0, 0);
    }
  });
} else {
  alert("Sorry, your browser does not support getUserMedia.");
}
Now, you should be able to click on the Capture button and capture one frame of the video feed on the canvas. The image will show up as a single frame inside the <canvas>
element. You can keep taking pictures to replace the image over and over again:
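If you want to keep a picture instead of overwriting it, the canvas can hand you the frame back as an image: canvas.toDataURL encodes the current contents as a base64 PNG data URL, which can then be offered as a download. The following is a minimal sketch under the assumption that the page above is loaded; the "save" button and the pngFilename helper are my own additions, not part of the original project:

```javascript
// Build a filename like "photobooth-2015-03-01T12-00-00.png".
// Colons are replaced because they are not safe in filenames.
function pngFilename(prefix, date) {
  var stamp = date.toISOString().replace(/:/g, '-').replace(/\..*$/, '');
  return prefix + '-' + stamp + '.png';
}

// Browser-only part, guarded so the file also loads outside a browser:
// on click, encode the canvas as a PNG and trigger a download of it.
if (typeof document !== 'undefined') {
  document.querySelector('#save').addEventListener('click', function () {
    var canvas = document.querySelector('canvas');
    var link = document.createElement('a');
    link.href = canvas.toDataURL('image/png');
    link.download = pngFilename('photobooth', new Date());
    link.click();
  });
}
```

This assumes an extra <button id="save">Save</button> next to the Capture button; wiring the same logic into the existing Capture handler works just as well.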
