Those Audio API Things (Part 1): AudioContext

Referenced from: Getting Started with Web Audio API
http://www.html5rocks.com/en/tutorials/webaudio/intro/

Introduction

Audio on the web has been fairly primitive up to this point and until very recently has had to be delivered through plugins such as Flash and QuickTime. The introduction of the audio element in HTML5 is very important, allowing for basic streaming audio playback. But, it is not powerful enough to handle more complex audio applications. For sophisticated web-based games or interactive applications, another solution is required. It is a goal of this specification to include the capabilities found in modern game audio engines as well as some of the mixing, processing, and filtering tasks that are found in modern desktop audio production applications.

The APIs have been designed with a wide variety of use cases in mind. Ideally, it should be able to support any use case which could reasonably be implemented with an optimized C++ engine controlled via JavaScript and run in a browser. That said, modern desktop audio software can have very advanced capabilities, some of which would be difficult or impossible to build with this system. Apple’s Logic Audio is one such application which has support for external MIDI controllers, arbitrary plugin audio effects and synthesizers, highly optimized direct-to-disk audio file reading/writing, tightly integrated time-stretching, and so on. Nevertheless, the proposed system will be quite capable of supporting a large range of reasonably complex games and interactive applications, including musical ones. And it can be a very good complement to the more advanced graphics features offered by WebGL. The API has been designed so that more advanced capabilities can be added at a later time.

Features

The API supports these primary features:

  • Modular routing for simple or complex mixing/effect architectures, including multiple sends and submixes.

  • High dynamic range, using 32-bit floats for internal processing.

  • Sample-accurate scheduled sound playback with low latency for musical applications requiring a very high degree of rhythmic precision such as drum machines and sequencers. This also includes the possibility of dynamic creation of effects.

  • Automation of audio parameters for envelopes, fade-ins / fade-outs, granular effects, filter sweeps, LFOs etc.

  • Flexible handling of channels in an audio stream, allowing them to be split and merged.

  • Processing of audio sources from an audio or video media element.

  • Processing live audio input using a MediaStream from getUserMedia().

  • Integration with WebRTC: processing audio received from a remote peer using a MediaStreamAudioSourceNode and [webrtc].

  • Sending a generated or processed audio stream to a remote peer using a MediaStreamAudioDestinationNode and [webrtc].

  • Audio stream synthesis and processing directly in JavaScript.

  • Spatialized audio supporting a wide range of 3D games and immersive environments:

    • Panning models: equalpower, HRTF, pass-through

    • Distance Attenuation

    • Sound Cones

    • Obstruction / Occlusion

    • Doppler Shift

    • Source / Listener based

  • A convolution engine for a wide range of linear effects, especially very high-quality room effects. Here are some examples of possible effects:

    • Small / large room

    • Cathedral

    • Concert hall

    • Cave

    • Tunnel

    • Hallway

    • Forest

    • Amphitheater

    • Sound of a distant room through a doorway

    • Extreme filters

    • Strange backwards effects

    • Extreme comb filter effects

  • Dynamics compression for overall control and sweetening of the mix

  • Efficient real-time time-domain and frequency analysis / music visualizer support

  • Efficient biquad filters for lowpass, highpass, and other common filters.

  • A Waveshaping effect for distortion and other non-linear effects

  • Oscillators

Modular Routing

Modular routing allows arbitrary connections between different AudioNode objects. Each node can have inputs and/or outputs. A source node has no inputs and a single output. A destination node has one input and no outputs, the most common example being AudioDestinationNode the final destination to the audio hardware. Other nodes such as filters can be placed between the source and destination nodes. The developer doesn’t have to worry about low-level stream format details when two objects are connected together; the right thing just happens. For example, if a mono audio stream is connected to a stereo input it should just mix to left and right channels appropriately.

In the simplest case, a single source can be routed directly to the output. All routing occurs within an AudioContext containing a single AudioDestinationNode:

[Figure: a single source connected directly to the AudioDestinationNode within an AudioContext]

AudioContext

An AudioContext is used to manage and play all sounds. To produce sound with the Web Audio API, you create one or more sound sources and connect them to the sound destination provided by the AudioContext instance. The connection does not need to be direct; it can pass through any number of intermediate AudioNodes, which act as processing modules for the audio signal.

A single AudioContext instance can support multiple sound inputs and complex audio graphs, so we only need one of them per audio application we build.
Many of the interesting Web Audio API features, such as creating AudioNodes and decoding audio file data, are methods of the AudioContext.

The following snippet creates an AudioContext:

var context;
window.addEventListener('load', init, false);
function init() {
  try {
    // Fix up for prefixing
    window.AudioContext = window.AudioContext||window.webkitAudioContext;
    context = new AudioContext();
  }
  catch(e) {
    alert('Web Audio API is not supported in this browser');
  }
}

Loading sounds

The Web Audio API uses an AudioBuffer for short- to medium-length sounds.
The basic approach is to use an XMLHttpRequest to fetch the sound file.
The API supports loading audio file data in multiple formats, such as WAV, MP3, AAC, and OGG.
Browser support for the different audio formats varies.

var dogBarkingBuffer = null;
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

function loadDogSound(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';

  // Decode asynchronously
  request.onload = function() {
    context.decodeAudioData(request.response, function(buffer) {
      dogBarkingBuffer = buffer;
    }, onError); // onError: an error-handling callback you define elsewhere
  };
  request.send();
}

Audio file data is binary (not text), so we set the responseType of the request to 'arraybuffer'.
(See the documentation on ArrayBuffers for more information.)
Once the (undecoded) audio file data has been received, it can be kept around for later decoding, or it can be decoded right away using the AudioContext's decodeAudioData() method.
This method takes the ArrayBuffer of audio file data stored in request.response and decodes it asynchronously (without blocking the main JavaScript execution thread).
When decodeAudioData() finishes, it calls a callback function, which provides the decoded PCM audio data as an AudioBuffer.

Once one or more AudioBuffers are loaded, we can play sounds.
Let's assume we have just loaded an AudioBuffer with the sound of a dog barking, and that loading has finished.
Then we can play this buffer with the following code.

// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

function playSound(buffer) {
  var source = context.createBufferSource(); // creates a sound source
  source.buffer = buffer;                    // tell the source which sound to play
  source.connect(context.destination);       // connect the source to the context's destination (the speakers)
  source.start(0);                           // play the source now
                                             // note: on older systems, may have to use deprecated noteOn(time);
}

Dealing with time: playing sounds with rhythm

The Web Audio API lets developers schedule playback precisely. To demonstrate this, let's set up a simple rhythm track:
the hi-hat plays on every eighth note, and the kick and snare alternate every quarter note, in 4/4 time.
Assuming we have already loaded the kick, snare, and hihat buffers, the code to do this is simple:

// Note: playSound(buffer, time) here is a two-argument variant of the earlier
// function that calls source.start(time) to schedule playback at `time`.
for (var bar = 0; bar < 2; bar++) {
  var time = startTime + bar * 8 * eighthNoteTime;
  // Play the bass (kick) drum on beats 1, 5
  playSound(kick, time);
  playSound(kick, time + 4 * eighthNoteTime);

  // Play the snare drum on beats 3, 7
  playSound(snare, time + 2 * eighthNoteTime);
  playSound(snare, time + 6 * eighthNoteTime);

  // Play the hi-hat every eighth note.
  for (var i = 0; i < 8; ++i) {
    playSound(hihat, time + i * eighthNoteTime);
  }
}
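The timing variables used above (startTime and eighthNoteTime) are assumed to be defined elsewhere. A minimal sketch of how they could be derived from a tempo, assuming 4/4 time (the tempo value is illustrative):

```javascript
// Length of an eighth note, in seconds, for a given tempo in BPM.
// In 4/4 time one beat is a quarter note, so an eighth note is half a beat.
function eighthNoteSeconds(tempoBpm) {
  var secondsPerBeat = 60 / tempoBpm;
  return secondsPerBeat / 2;
}

var tempo = 80; // beats per minute (hypothetical value)
var eighthNoteTime = eighthNoteSeconds(tempo); // 0.375 s at 80 BPM
// startTime would typically be context.currentTime plus a small lead-in.
```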

Don't try this lightly. The result is quite something, and I can hardly bear to listen to it.

Changing the volume of a sound

One of the most basic operations you might want to perform on a sound is changing its volume.
Using the Web Audio API, we can route our source through a GainNode to the destination in order to manipulate the volume:
[Figure: source routed through a GainNode to the destination]

This connection setup can be achieved as follows:

// Create a gain node.
var gainNode = context.createGain();
// Connect the source to the gain node.
source.connect(gainNode);
// Connect the gain node to the destination.
gainNode.connect(context.destination);

After the graph has been set up, you can change the volume programmatically by manipulating gainNode.gain.value as follows:

// Reduce the volume.
gainNode.gain.value = 0.5;

Try cranking the gain up a few thousand times. That is a scene I dare not face.
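Gain values are linear amplitude multipliers, while mixing UIs usually work in decibels. A small conversion helper may be useful; this is a sketch, and the function names are mine, not part of the API:

```javascript
// Convert decibels to a linear gain usable as gainNode.gain.value, and back.
// 0 dB -> 1.0 (unchanged), -6 dB -> ~0.5, larger negative values -> quieter.
function dbToGain(db) {
  return Math.pow(10, db / 20);
}

function gainToDb(gain) {
  return 20 * Math.log10(gain);
}
```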

Cross-fading between two sounds

Now suppose we have a slightly more complex scenario: we're playing multiple sounds but want to cross-fade between them.
This is a common case in a DJ-like application, where we have two turntables and want to pan from one sound source to the other.

[Figure: two sources, each routed through its own GainNode to the destination]

To set this up, we simply create two GainNodes and connect each source through its node:

function createSource(buffer) {
  var source = context.createBufferSource();
  // Create a gain node.
  var gainNode = context.createGain();
  source.buffer = buffer;
  // Turn on looping.
  source.loop = true;
  // Connect source to gain.
  source.connect(gainNode);
  // Connect gain to destination.
  gainNode.connect(context.destination);

  return {
    source: source,
    gainNode: gainNode
  };
}

Equal power crossfading

A naive linear crossfade approach exhibits a volume dip as you pan between the samples.

[Figure: a linear crossfade produces a volume dip at the midpoint]

To address this issue, we use an equal power curve, in which the corresponding gain curves are non-linear and intersect at a higher amplitude.
This minimizes volume dips between audio regions, resulting in a more even crossfade between regions that might be slightly different in level.

[Figure: an equal power crossfade keeps the combined level constant]
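The equal power curve can be computed with cosine ramps so that the squares of the two gains always sum to 1 (constant total power). A minimal sketch, assuming a mix position x running from 0 (all track A) to 1 (all track B):

```javascript
// Equal-power crossfade gains for a mix position x in [0, 1].
// gainA^2 + gainB^2 === 1 for every x, so perceived power stays constant.
function equalPowerGains(x) {
  return {
    gainA: Math.cos(x * 0.5 * Math.PI),
    gainB: Math.cos((1.0 - x) * 0.5 * Math.PI)
  };
}

// At the midpoint both gains are ~0.707 rather than 0.5,
// which is what avoids the dip of a linear crossfade.
```

These values would be assigned to the two GainNodes' gain.value as the crossfade control moves.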

Playlist crossfading

Another common smooth-transition application is a music player.
When a song changes, we want to fade the current track out and fade the new one in, to avoid a jarring transition.
To do this, we schedule the crossfade in the future.
While we could use setTimeout to do this scheduling, it is not precise.
With the Web Audio API, we can use the AudioParam interface to schedule future values for parameters, such as the gain value of a GainNode.
So, given a playlist, we can transition between tracks by scheduling a gain decrease on the currently playing track and a gain increase on the next one, both slightly before the current track finishes playing:

function playHelper(bufferNow, bufferLater) {
  var playNow = createSource(bufferNow);
  var source = playNow.source;
  var gainNode = playNow.gainNode;
  var duration = bufferNow.duration;
  var currTime = context.currentTime;
  // Fade the playNow track in.
  gainNode.gain.linearRampToValueAtTime(0, currTime);
  gainNode.gain.linearRampToValueAtTime(1, currTime + ctx.FADE_TIME);
  // Play the playNow track.
  source.start(0);
  // At the end of the track, fade it out.
  gainNode.gain.linearRampToValueAtTime(1, currTime + duration - ctx.FADE_TIME);
  gainNode.gain.linearRampToValueAtTime(0, currTime + duration);
  // Schedule a recursive track change with the tracks swapped.
  // (arguments.callee is deprecated; recurse by the function's name instead.)
  ctx.timer = setTimeout(function() {
    playHelper(bufferLater, bufferNow);
  }, (duration - ctx.FADE_TIME) * 1000);
}
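The four ramp times above follow a simple pattern. A sketch that computes them from a track duration and fade length (the helper name is mine, for illustration only):

```javascript
// Compute the four gain automation times for a fade-in/fade-out envelope:
// [begin fade-in (gain 0), end fade-in (gain 1),
//  begin fade-out (gain 1), end fade-out (gain 0)].
function fadeSchedule(currTime, duration, fadeTime) {
  return [
    currTime,
    currTime + fadeTime,
    currTime + duration - fadeTime,
    currTime + duration
  ];
}
```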

Applying a simple filter effect to a sound

[Figure: a BiquadFilterNode inserted between the source and the destination]

The Web Audio API lets you pipe sound from one audio node into another, creating a potentially complex chain of processors that adds complex effects to your soundforms.
One way to do this is to place BiquadFilterNodes between your sound source and destination.
This type of audio node can do a variety of low-order filters, which can be used to build graphic equalizers and even more complex effects, mostly having to do with selecting which parts of the frequency spectrum of a sound to emphasize and which to subdue.
The supported types of filters include:

  • Low pass filter

  • High pass filter

  • Band pass filter

  • Low shelf filter

  • High shelf filter

  • Peaking filter

  • Notch filter

  • All pass filter

All of the filters include parameters to specify some amount of gain, the frequency at which to apply the filter, and a quality factor.
The low-pass filter keeps the lower frequency range but discards high frequencies.
The break-off point is determined by the frequency value, and the Q factor is unitless and determines the shape of the graph.
The gain only affects certain filters, such as the low-shelf and peaking filters, and not this low-pass filter.

In general, frequency controls need to be tweaked to work on a logarithmic scale, since human hearing itself works on the same principle (that is, A4 is 440 Hz, and A5 is 880 Hz).
For more details, see the FilterSample.changeFrequency function in the source code link above.
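The logarithmic mapping mentioned above can be implemented by interpolating the exponent rather than the frequency directly. A minimal sketch, assuming a slider position in [0, 1]; the frequency bounds are illustrative:

```javascript
// Map a linear slider position in [0, 1] to a frequency on a log scale.
// position 0 -> minHz, position 1 -> maxHz,
// position 0.5 -> the geometric mean sqrt(minHz * maxHz).
function sliderToFrequency(position, minHz, maxHz) {
  return minHz * Math.pow(maxHz / minHz, position);
}

// Equal slider increments multiply the frequency by a constant ratio,
// matching pitch perception (each octave doubles the Hz).
```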
Finally, note that the sample code lets you connect and disconnect the filter, dynamically changing the AudioContext graph.
We can disconnect AudioNodes from the graph by calling node.disconnect(outputNumber).
For example, to re-route the graph from going through a filter to a direct connection, we can do the following:

// Disconnect the source and filter.
source.disconnect(0);
filter.disconnect(0);
// Connect the source directly.
source.connect(context.destination);
    Original author: andypinet
    Original article: https://segmentfault.com/a/1190000003115198