
How do I convert a getUserMedia audio stream to a Blob or buffer?

Updated: 2023-02-04 17:26:34

Per @MuazKhan's comment, use MediaRecorder (supported in Firefox, with Chrome support expected) or RecordRTC/etc. to capture the data into Blobs. You can then export them to a server for distribution via one of several methods: WebSockets, WebRTC DataChannels, etc. Note that these are NOT guaranteed to transfer the data in realtime, and MediaRecorder does not yet have bitrate controls; if transmission is delayed, data may build up locally.
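A minimal sketch of the MediaRecorder approach described above, assuming browser-only APIs (`navigator.mediaDevices`, `MediaRecorder`); the function name and the 1-second timeslice are illustrative choices, not part of the original answer:

```javascript
// Capture microphone audio into a single Blob with MediaRecorder.
// Runs only in a browser context; durationMs bounds the recording length.
async function recordAudioToBlob(durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];

  // Each dataavailable event delivers one Blob chunk; collect them all.
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) chunks.push(e.data);
  };

  return new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop()); // release the mic
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    };
    recorder.start(1000); // emit a chunk roughly every second
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```

To export incrementally rather than at the end, each `ondataavailable` chunk could be pushed over a WebSocket as it arrives; and to get a buffer instead of a Blob, `await blob.arrayBuffer()` yields an ArrayBuffer.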

If realtime (re)transmission is important, strongly consider instead using a PeerConnection to a server (per @Robert's comment) and transforming the audio into a stream there. (How that is done depends on the server, but you will have encoded Opus data to either repackage or decode and re-encode.) While re-encoding is generally undesirable, in this case you would do best to decode through NetEq (the webrtc.org stack's jitter-buffer and PacketLossConcealment code) and get a clean realtime audio stream to re-encode for streaming, with loss and jitter already dealt with.
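The browser side of the PeerConnection approach can be sketched as below. This is only the client half under stated assumptions: the signaling transport (a WebSocket carrying JSON with `sdp`/`candidate` fields) and the `signalingUrl` endpoint are hypothetical, since signaling is server-specific and not defined by WebRTC:

```javascript
// Send live microphone audio to a server-side peer over WebRTC,
// so the server receives Opus RTP instead of uploaded blobs.
async function streamAudioToServer(signalingUrl) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const pc = new RTCPeerConnection();
  stream.getAudioTracks().forEach((track) => pc.addTrack(track, stream));

  // Hypothetical JSON-over-WebSocket signaling; real servers vary.
  const ws = new WebSocket(signalingUrl);
  ws.onmessage = async (msg) => {
    const { sdp, candidate } = JSON.parse(msg.data);
    if (sdp) await pc.setRemoteDescription(sdp);      // server's answer
    if (candidate) await pc.addIceCandidate(candidate);
  };
  pc.onicecandidate = (e) => {
    if (e.candidate) ws.send(JSON.stringify({ candidate: e.candidate }));
  };

  await new Promise((resolve) => (ws.onopen = resolve));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  ws.send(JSON.stringify({ sdp: pc.localDescription }));
  return pc;
}
```

On the server, the received Opus packets would then be repackaged directly or decoded through NetEq and re-encoded, as the answer describes.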