How to change buffers - double buffering

You do not mention it here, but I know from your previous questions that you actually want to play the audio in a client application, which receives the audio through a socket connection. I am only going to address the server side here.

It seems to me that you have focused too much on the API documentation's notes about using double buffering. Yes, you should do this to avoid time gaps with missing audio resulting in "pops" and audio dropouts, but do not let this dictate the architecture of the rest of your application.

To separate the functionality, I would suggest you set up classes for the following 3 areas in your server (your server will obviously have more classes and functionality, but these are the ones I will be referring to; a rough interface sketch follows the list):
- Audio Source handler responsible for initializing the WaveIn device and reacting to new audio data being captured.
- File handler responsible for creating audio files, adding new audio to the files, closing files, etc.
- Network handler responsible for accepting incoming client connections and sending audio packets to the connected client(s).

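As a rough illustration of this split, here is a minimal C++ interface sketch. The class and method names (AudioSourceHandler, FileHandler, NetworkHandler, append(), send(), onBufferFull()) are assumptions made purely for illustration, not anything dictated by the WaveIn API:

```cpp
// Hypothetical interfaces only; each class owns one of the three concerns above.
#include <cstddef>
#include <cstdint>
#include <vector>

using AudioBlock = std::vector<std::uint8_t>;   // one captured buffer's worth of PCM data

class FileHandler {
public:
    void open(const char* path);            // create the audio file
    void append(const AudioBlock& block);    // add newly captured audio to the file
    void close();                            // finalize the file
};

class NetworkHandler {
public:
    void acceptClients();                    // accept incoming client connections
    void send(const AudioBlock& block);      // send an audio packet to the connected client(s)
};

class AudioSourceHandler {
public:
    AudioSourceHandler(FileHandler& file, NetworkHandler& net);
    void start();                            // open the WaveIn device, queue the buffers, start capturing
    void stop();
    // Called whenever a capture buffer has been filled; copies the data and hands it on.
    void onBufferFull(const std::uint8_t* data, std::size_t bytes);
};
```
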
When you create the two buffers for capturing the audio from the WaveIn device, you have to decide how large these buffers are going to be. It is usually fine if they are large enough to contain 0.5-1 second of audio or even less.
During initialization, you register the two buffers with the device using waveInPrepareHeader() and waveInAddBuffer().
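As a minimal sketch of that initialization against the Win32 waveIn API, assuming 16-bit mono PCM at 44.1 kHz and two buffers of roughly 0.5 second each, and using CALLBACK_EVENT so a Win32 event is signaled whenever a buffer has been filled (error handling is reduced to a single check for brevity):

```cpp
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

// 0.5 s per buffer: 44100 samples/s * 2 bytes/sample * 0.5 s = 44100 bytes.
static char    g_bufferData[2][44100];
static WAVEHDR g_headers[2];

HWAVEIN OpenAndStartCapture(HANDLE bufferDoneEvent)
{
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEIN hWaveIn = NULL;
    // CALLBACK_EVENT: the event handle is signaled every time a buffer is returned full.
    if (waveInOpen(&hWaveIn, WAVE_MAPPER, &fmt,
                   (DWORD_PTR)bufferDoneEvent, 0, CALLBACK_EVENT) != MMSYSERR_NOERROR)
        return NULL;

    // Prepare both headers and queue both buffers so capture can alternate between them.
    for (int i = 0; i < 2; ++i) {
        g_headers[i] = WAVEHDR();
        g_headers[i].lpData         = g_bufferData[i];
        g_headers[i].dwBufferLength = sizeof(g_bufferData[i]);
        waveInPrepareHeader(hWaveIn, &g_headers[i], sizeof(WAVEHDR));
        waveInAddBuffer(hWaveIn, &g_headers[i], sizeof(WAVEHDR));
    }

    waveInStart(hWaveIn);
    return hWaveIn;
}
```

Because both buffers are queued up front, the driver can keep recording into the second buffer while you deal with the first one.
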
Now, every time you get the signal that the current buffer is full, you should perform the following steps (a code sketch follows the list):
- Call waveInAddBuffer() to switch the audio capturing to use the other buffer.
- Pass a copy of the buffer that has just been filled to the File handler.
- Pass another copy of the buffer to the Network handler.
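Continuing the sketch above, the loop below is one way those steps can look in code. It reuses g_headers, FileHandler, NetworkHandler and AudioBlock from the earlier sketches (all of them assumptions for illustration), and it re-queues the finished buffer with waveInAddBuffer() only after the data has been copied out, so the driver never records into memory that is still being read:

```cpp
#include <atomic>
// plus the includes, globals and handler types from the previous sketches

void CaptureLoop(HWAVEIN hWaveIn, HANDLE bufferDoneEvent,
                 FileHandler& fileHandler, NetworkHandler& networkHandler,
                 std::atomic<bool>& running)
{
    while (running) {
        // Signaled by the driver (via the event passed to waveInOpen) when a buffer is full.
        WaitForSingleObject(bufferDoneEvent, INFINITE);

        for (int i = 0; i < 2; ++i) {
            if (!(g_headers[i].dwFlags & WHDR_DONE))
                continue;   // this buffer is still being recorded into

            // Copy the recorded bytes out of the capture buffer before giving it back to the driver.
            const std::uint8_t* data =
                reinterpret_cast<const std::uint8_t*>(g_headers[i].lpData);
            AudioBlock block(data, data + g_headers[i].dwBytesRecorded);
            fileHandler.append(block);
            networkHandler.send(block);

            // Re-queue the buffer so capture keeps alternating between the two buffers.
            g_headers[i].dwFlags &= ~WHDR_DONE;
            waveInAddBuffer(hWaveIn, &g_headers[i], sizeof(WAVEHDR));
        }
    }
}
```

In a real server, append() and send() should only hand the copy over and return immediately, which is exactly the non-blocking mechanism discussed below.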

I know this might seem like a lot of copying, but this way the buffers used for capturing audio are always available to be swapped, and file handling is not compromised by network issues (and vice versa).
You need a mechanism for passing a block of audio data to the File and Network handlers so the Audio Source handler can get rid of the data and not worry about what happens to it. There are several ways of doing this and I will not go into any details here, except to recommend that you use a non-blocking method.
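For what it is worth, one simple non-blocking shape for that handoff is a small thread-safe queue: the capture side pushes a block and returns immediately (dropping the oldest block if a consumer falls badly behind), while the File and Network handlers each drain their own queue on a worker thread. The BlockQueue class below is purely my own sketch, not something from the waveIn API or this answer:

```cpp
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

using AudioBlock = std::vector<std::uint8_t>;

class BlockQueue {
public:
    // Called from the capture path: never blocks; at worst it discards the oldest block.
    void push(AudioBlock block) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.size() >= kMaxQueued)
            queue_.pop_front();                 // drop old audio rather than stall capture
        queue_.push_back(std::move(block));
        ready_.notify_one();
    }

    // Called from the File or Network worker thread: waits until a block is available.
    AudioBlock pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        AudioBlock block = std::move(queue_.front());
        queue_.pop_front();
        return block;
    }

private:
    static constexpr std::size_t kMaxQueued = 16;
    std::deque<AudioBlock> queue_;
    std::mutex mutex_;
    std::condition_variable ready_;
};
```

Dropping the oldest block when a consumer cannot keep up is a deliberate trade-off: a slow disk or a struggling client loses a little audio, but the Audio Source handler is never held up.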


I hope this helps.

Soren Madsen