Android Source Code Analysis: AudioEffect

Audio effects: AudioEffect

As shown in the figure below, application developers use android.media.audiofx.AudioEffect to control audio effects. Its subclasses include BassBoost, EnvironmentalReverb, Equalizer, PresetReverb, and Virtualizer.

AudioEffect's interface methods (enable/disable and so on) call down into the JNI layer, which in turn calls the C++ AudioEffect class. The latter then makes cross-process calls through an IEffect pointer referring to a BpEffect proxy object, finally reaching AudioFlinger::EffectHandle on the server side.

 

 

[Figure: call path of the audio-effect framework. Apps (Java side) use android.media.audiofx.AudioEffect; calls pass through JNI (android_media_AudioEffect.cpp) into the native C++ AudioEffect, then over Binder IPC through IEffect (IEffect.cpp) to AudioFlinger::EffectHandle on the C/C++ server side, which derives from BnEffect / BnInterface<IEffect>.]

Effect processing happens in a separate thread inside AudioFlinger on the server side, associated with a playback thread (PlaybackThread). A single effect is represented by an EffectModule, which is essentially a wrapper class defining a unified interface. The actual effect-processing algorithms may be packaged in other libraries; EffectModule is responsible for invoking them.

 

 

The effect-processing engine

The actual effect processing is done by an effect engine, usually packaged in a separate library. Under frameworks/base/media/libeffects in the Android source tree there are several such engines. These engines must implement a defined API (see frameworks/base/include/media/EffectApi.h) for external callers. In Android's audio-effect framework, EffectModule acts as the wrapper class around an engine and calls the engine API. The effect control interface defined in EffectApi.h is shown below; it is simply a struct holding two function pointers:

struct effect_interface_s {
    effect_process_t process;
    effect_command_t command;
};

What is commonly used, however, is effect_interface_t:

typedef struct effect_interface_s **effect_interface_t;

The function type of the process member is:

typedef int32_t (*effect_process_t)(effect_interface_t self,
                                    audio_buffer_t *inBuffer,
                                    audio_buffer_t *outBuffer);

process is the actual effect-processing function. The inBuffer parameter holds the input audio data to be processed, i.e. the engine reads audio from this buffer; outBuffer is where the processed data is stored. If a buffer is not provided (the caller passes a NULL pointer), the buffers delivered by the EFFECT_CMD_CONFIGURE configuration command must be used instead. The configuration information carried by that command is defined as follows:

typedef struct effect_config_s {
    buffer_config_t inputCfg;
    buffer_config_t outputCfg;
} effect_config_t;

As you can see, the configuration command supplies configuration information for both the input and the output buffer. The buffer configuration is defined as follows; its first member is the buffer that process accesses:

typedef struct buffer_config_s {
    audio_buffer_t buffer;            // buffer for use by process() function if not passed explicitly
    uint32_t samplingRate;            // sampling rate
    uint32_t channels;                // channel mask (see audio_channels_e)
    buffer_provider_t bufferProvider; // buffer provider
    uint8_t format;                   // audio format (see audio_format_e)
    uint8_t accessMode;               // read/write or accumulate in buffer (effect_buffer_access_e)
    uint16_t mask;                    // indicates which of the above fields is valid
} buffer_config_t;

If the EFFECT_CMD_CONFIGURE command does not explicitly specify a buffer either, the function pointers in buffer_provider_t are used to obtain one:

typedef int32_t (*buffer_function_t)(void *cookie, audio_buffer_t *buffer);

typedef struct buffer_provider_s {
    buffer_function_t getBuffer;     // retrieve next buffer
    buffer_function_t releaseBuffer; // release used buffer
    void *cookie;                    // for use by client of buffer provider functions
} buffer_provider_t;

After the EFFECT_CMD_ENABLE command is received, the process function starts being called; after the EFFECT_CMD_DISABLE command is received, effect processing is shut down, an OK is sent back, and process stops being called. The implementation of an engine's process function should avoid blocking calls such as malloc/free, sleep, read/write/open/close, and pthread_cond_wait/pthread_mutex_lock, to keep it as close to real time as possible.
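To make the process() contract concrete (no blocking calls, and fall back to the configured buffers when the caller passes NULL), here is a minimal pass-through sketch. The audio_buffer_t stand-in and the pass_through_t self type are simplified assumptions, not the real EffectApi.h definitions:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for the EffectApi.h audio_buffer_t (assumption:
 * the real struct carries frameCount plus a union of sample pointers). */
typedef struct { size_t frameCount; int16_t *s16; } audio_buffer_t;

/* Hypothetical per-effect state holding the buffers saved from
 * EFFECT_CMD_CONFIGURE. */
typedef struct {
    audio_buffer_t *cfgIn;
    audio_buffer_t *cfgOut;
} pass_through_t;

/* Minimal process(): no blocking calls, just a copy. If the caller
 * passes NULL buffers, fall back to the configured ones, as the
 * EffectApi contract requires. */
static int32_t pass_through_process(pass_through_t *self,
                                    audio_buffer_t *inBuffer,
                                    audio_buffer_t *outBuffer)
{
    audio_buffer_t *in  = inBuffer  ? inBuffer  : self->cfgIn;
    audio_buffer_t *out = outBuffer ? outBuffer : self->cfgOut;
    if (in == NULL || out == NULL || in->s16 == NULL || out->s16 == NULL)
        return -1; /* the real API would return a negative errno */
    size_t frames = in->frameCount < out->frameCount ? in->frameCount
                                                     : out->frameCount;
    memcpy(out->s16, in->s16, frames * sizeof(int16_t)); /* mono for brevity */
    out->frameCount = frames;
    return 0;
}
```

A real engine would apply its DSP algorithm in place of the memcpy, and would honor the accessMode (write vs. accumulate) from the buffer configuration.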

The function type of the other member, command, is:

typedef int32_t (*effect_command_t)(effect_interface_t self,
                                    uint32_t cmdCode,    // command code
                                    uint32_t cmdSize,    // size of the command data, in bytes
                                    void *pCmdData,      // data for this command
                                    uint32_t *replySize, // reply size, in bytes
                                    void *pReplyData);   // reply data

It sends a command to the effect engine and then returns the engine's response.

The defined command codes are:

enum effect_command_e {
    EFFECT_CMD_INIT,               // initialize effect engine
    EFFECT_CMD_CONFIGURE,          // configure effect engine (see effect_config_t)
    EFFECT_CMD_RESET,              // reset effect engine
    EFFECT_CMD_ENABLE,             // enable effect process
    EFFECT_CMD_DISABLE,            // disable effect process
    EFFECT_CMD_SET_PARAM,          // set parameter immediately (see effect_param_t)
    EFFECT_CMD_SET_PARAM_DEFERRED, // set parameter deferred
    EFFECT_CMD_SET_PARAM_COMMIT,   // commit previous set parameter deferred
    EFFECT_CMD_GET_PARAM,          // get parameter
    EFFECT_CMD_SET_DEVICE,         // set audio device (see audio_device_e)
    EFFECT_CMD_SET_VOLUME,         // set volume
    EFFECT_CMD_SET_AUDIO_MODE,     // set the audio mode (normal, ring, ...)
    EFFECT_CMD_FIRST_PROPRIETARY = 0x10000 // first proprietary command code
};

For an explanation of each command, refer to the comments in the source code.
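To illustrate how a caller drives the engine through the double-pointer effect_interface_t, here is a hedged, self-contained sketch with a stub engine; the effect_enable helper and the stub are illustrative names, but the (*itf)->command(...) dispatch shape follows the typedefs above:

```c
#include <stdint.h>
#include <stddef.h>

enum { EFFECT_CMD_INIT, EFFECT_CMD_ENABLE, EFFECT_CMD_DISABLE };

struct effect_interface_s;
/* pointer-to-pointer, as in EffectApi.h */
typedef struct effect_interface_s **effect_interface_t;

typedef int32_t (*effect_command_t)(effect_interface_t self,
                                    uint32_t cmdCode, uint32_t cmdSize,
                                    void *pCmdData, uint32_t *replySize,
                                    void *pReplyData);

struct effect_interface_s {
    void *process;           /* omitted in this sketch */
    effect_command_t command;
};

/* Caller-side helper (illustrative): simple commands such as ENABLE
 * reply with a single int32_t status. Note the (*itf)-> double
 * dereference required by the effect_interface_t typedef. */
static int32_t effect_enable(effect_interface_t itf)
{
    int32_t status = -1;
    uint32_t replySize = sizeof(status);
    int32_t ret = (*itf)->command(itf, EFFECT_CMD_ENABLE,
                                  0, NULL, &replySize, &status);
    return (ret == 0) ? status : ret;
}

/* Stub engine standing in for a real library implementation. */
static int32_t stub_command(effect_interface_t self, uint32_t cmdCode,
                            uint32_t cmdSize, void *pCmdData,
                            uint32_t *replySize, void *pReplyData)
{
    (void)self; (void)cmdCode; (void)cmdSize; (void)pCmdData;
    if (replySize && *replySize >= sizeof(int32_t) && pReplyData) {
        *(int32_t *)pReplyData = 0; /* report success for any command */
        return 0;
    }
    return -1;
}
```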

 

With the engine control interface effect_interface_t described above, we can send it command codes to initialize/reset/enable/disable/configure the engine, and use process to apply effects to audio data. But where does this control interface come from? A library implementing an engine must implement the following function interfaces:

typedef int32_t (*effect_QueryNumberEffects_t)(uint32_t *pNumEffects);

typedef int32_t (*effect_QueryEffect_t)(uint32_t index, effect_descriptor_t *pDescriptor);

typedef int32_t (*effect_CreateEffect_t)(effect_uuid_t *uuid,
                                         int32_t sessionId,
                                         int32_t ioId,
                                         effect_interface_t *pInterface);

typedef int32_t (*effect_ReleaseEffect_t)(effect_interface_t interface);

The first, effect_QueryNumberEffects_t, queries how many effect engines the library contains; the count is returned through the pNumEffects output parameter.

The second, effect_QueryEffect_t, enumerates the effect descriptor effect_descriptor_t for each index up to that count. Typical code looks like:

for (i = 0; i < num_effects; i++)
    EffectQueryEffect(i, ...);

where num_effects is the number of supported engines obtained from the first interface function.

The third, effect_CreateEffect_t, creates the corresponding effect engine; what it yields is the engine control interface effect_interface_t, returned through the last parameter. Its first parameter, uuid, is a member of the effect descriptor effect_descriptor_t. When an engine is no longer in use, the fourth function, effect_ReleaseEffect_t, is called to release it.

In short, once an effect library implements these interface APIs, we can use the first function to get the total number of effects it supports, the second to enumerate the descriptors of those effects, and the third to obtain the control interface for an effect from the uuid in its descriptor. With that control interface we can then drive the engine: send commands, call process on audio data, and so on. When the effect is no longer needed, the fourth API function releases its resources.
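The four-step flow can be sketched with stub entry points standing in for a real engine library (a real caller would resolve these symbols with dlsym, as EffectsFactory does, and the real effect_descriptor_t and effect_uuid_t are much richer than the stand-ins here):

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for effect_descriptor_t; a plain uint32_t plays
 * the role of effect_uuid_t for this sketch. */
typedef struct { uint32_t uuid; char name[32]; } effect_descriptor_t;

/* --- stub "library" exposing the four mandatory entry points --- */
static effect_descriptor_t g_descs[] = {
    { 0x1001, "StubEqualizer" },
    { 0x1002, "StubBassBoost" },
};
#define NUM_DESCS (sizeof(g_descs) / sizeof(g_descs[0]))

int32_t EffectQueryNumberEffects(uint32_t *pNumEffects) {
    *pNumEffects = NUM_DESCS;
    return 0;
}
int32_t EffectQueryEffect(uint32_t index, effect_descriptor_t *pDescriptor) {
    if (index >= NUM_DESCS) return -1;
    *pDescriptor = g_descs[index];
    return 0;
}
int32_t EffectCreate(const uint32_t *uuid, int32_t sessionId, int32_t ioId,
                     void **pInterface) {
    (void)sessionId; (void)ioId;
    for (uint32_t i = 0; i < NUM_DESCS; i++)
        if (g_descs[i].uuid == *uuid) { *pInterface = &g_descs[i]; return 0; }
    return -1;
}
int32_t EffectRelease(void *interface) { (void)interface; return 0; }

/* --- caller-side flow: count, enumerate, create by uuid --- */
int create_by_name(const char *name, void **itf) {
    uint32_t n = 0;
    EffectQueryNumberEffects(&n);          /* step 1: how many effects */
    for (uint32_t i = 0; i < n; i++) {
        effect_descriptor_t d;
        if (EffectQueryEffect(i, &d) == 0  /* step 2: enumerate descriptors */
            && strcmp(d.name, name) == 0)
            /* step 3: create from the descriptor's uuid */
            return EffectCreate(&d.uuid, 0 /* session */, 0 /* io */, itf);
    }
    return -1;
}
```

Step 4 (EffectRelease) is called once the caller is done with the interface.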

 

Loading the effect-engine libraries -- EffectsFactory

To reduce coupling between AudioFlinger and the engine libraries, loading the libraries and creating the corresponding engine control interfaces is delegated to a factory (an application of the factory design pattern). At runtime it automatically scans all shared libraries under the device path "/system/lib/soundfx", resolves their API function symbols, and stores the results in a global circular linked list, thereby discovering every effect-engine library on the system. Any library that follows the API convention and is placed in that path will be picked up. The loading code lives in frameworks/base/media/libeffects/factory/EffectsFactory.c. Its init function opens the gEffectLibPath directory (i.e. "/system/lib/soundfx"), iterates over the .so files in it, and for each one calls loadLibrary to resolve the library's function symbols and effect descriptors, adding that information to a node of the circular list. The following snippet resolves the function symbols from a library:

// Check functions availability
queryNumFx = (effect_QueryNumberEffects_t)dlsym(hdl, "EffectQueryNumberEffects");
if (queryNumFx == NULL) {
    LOGW("could not get EffectQueryNumberEffects from lib %s", libPath);
    ret = -ENODEV;
    goto error;
}

queryFx = (effect_QueryEffect_t)dlsym(hdl, "EffectQueryEffect");
if (queryFx == NULL) {
    LOGW("could not get EffectQueryEffect from lib %s", libPath);
    ret = -ENODEV;
    goto error;
}

createFx = (effect_CreateEffect_t)dlsym(hdl, "EffectCreate");
if (createFx == NULL) {
    LOGW("could not get EffectCreate from lib %s", libPath);
    ret = -ENODEV;
    goto error;
}

releaseFx = (effect_ReleaseEffect_t)dlsym(hdl, "EffectRelease");
if (releaseFx == NULL) {
    LOGW("could not get EffectRelease from lib %s", libPath);
    ret = -ENODEV;
    goto error;
}

The following code creates a node holding the information parsed from one library and adds it to the global circular list:

// add entry for library in gLibraryList
l = malloc(sizeof(lib_entry_t));     // node holding the information parsed from this library
l->id = ++gNextLibId;                // assign an incrementing id
l->handle = hdl;                     // handle returned by dlopen
strncpy(l->path, libPath, PATH_MAX); // save the library path
l->createFx = createFx;              // resolved symbol that creates the effect control interface
l->releaseFx = releaseFx;            // resolved symbol that releases resources
l->effects = descHead;               // head of the list of effect descriptors (possibly several) in this library
pthread_mutex_init(&l->lock, NULL);

e = malloc(sizeof(list_elem_t));     // allocate a list element
e->next = gLibraryList;              // link it in at the head
e->object = l;                       // point it at the parsed library node
gLibraryList = e;                    // the global list now starts at the new element

The inverse operation, unloadLibrary, frees the memory allocated for those nodes and finally closes the library via its handle.

A caller creates an effect engine by invoking EffectCreate with a uuid. If the factory has not been initialized yet, init is called first to load and parse the libraries and build the global list. The factory then searches that list for an engine matching the uuid; when found, it calls the engine's create function to obtain its control interface, and finally adds the engine to another global list, gEffectList, which tracks all effect engines created so far.

The engine source code lives under frameworks/base/media/libeffects; it ultimately produces three engine libraries:

$ ls system/lib/soundfx/
libbundlewrapper.so  libreverbwrapper.so  libvisualizer.so

 

Wrapping the engine -- EffectModule

As mentioned earlier, EffectModule is a wrapper around the effect engine. In EffectModule's constructor, the factory's EffectCreate function is called with the uuid to obtain the engine control interface, which is stored in the member variable mEffectInterface:

AudioFlinger::EffectModule::EffectModule(const wp<ThreadBase>& wThread,
                                         const wp<AudioFlinger::EffectChain>& chain,
                                         effect_descriptor_t *desc,
                                         int id,
                                         int sessionId)
    : mThread(wThread), mChain(chain), mId(id), mSessionId(sessionId),
      mEffectInterface(NULL), mStatus(NO_INIT), mState(IDLE)
{
    LOGV("Constructor %p", this);
    int lStatus;
    sp<ThreadBase> thread = mThread.promote();
    if (thread == 0) {
        return;
    }
    PlaybackThread *p = (PlaybackThread *)thread.get();
    memcpy(&mDescriptor, desc, sizeof(effect_descriptor_t));
    // create effect engine from effect factory: the uuid in the descriptor
    // passed to the constructor is handed to the factory's EffectCreate,
    // which returns the engine control interface
    mStatus = EffectCreate(&desc->uuid, sessionId, p->id(), &mEffectInterface);
    if (mStatus != NO_ERROR) {
        return;
    }
    // init() sends the EFFECT_CMD_INIT command to the engine
    lStatus = init();
    if (lStatus < 0) {
        mStatus = lStatus;
        goto Error;
    }
    LOGV("Constructor success name %s, Interface %p", mDescriptor.name, mEffectInterface);
    return;

Error:
    EffectRelease(mEffectInterface); // release resources on error
    mEffectInterface = NULL;
    LOGV("Constructor Error %d", mStatus);
}

With the engine control interface in hand, subsequent operations on the engine all go through it; most of EffectModule's other member functions work this way.

 

The effect chain -- EffectChain

Multiple effects can be chained together to produce a particular composite effect; EffectChain represents such a chain. Every track can have an effect chain composed of several effects attached to it, like EffectChain1 and EffectChain2 in the figure below.

[Figure: per-track effect chains EffectChain1 and EffectChain2 feed into the Mix; the mixed output then passes through the global chain EffectChain3.]

After the output mixer thread (PlaybackThread) has mixed the tracks, effects can also be applied to the mixed PCM data, as shown by EffectChain3. For such global chains that process the mix, the session ID must be 0. Where an effect ends up depends on which audio session it is attached to; its position within a chain can be controlled by flags following certain ordering rules, but usually no flag is given, in which case insertion order decides: effects added earlier come first, those added later follow.

 

 

The EffectChain class represents a group of effects associated with one audio session. There can be any number of EffectChain objects per output mixer thread (PlaybackThread). The EffectChain with session ID 0 contains global effects applied to the output mix. Effects in this chain can be insert or auxiliary. Effects in other chains (attached to tracks) are insert only. The EffectChain maintains an ordered list of effect modules, the order corresponding to the effect processing order. When attached to a track (session ID != 0), it also provides its own input buffer, used by the track as an accumulation buffer.

An insert effect is an effect that can be plugged into one, and only one, channel. Aux inputs are used for effects that require a blend of effected and non-effected signals. Effects like reverb, chorus, and delay sound terrible when you hear only 100 percent effects. Aux effects are also used when you wish to use the same effect on more than one channel. An aux input is plugged into the master section aux input/output, and each channel has a control for how much of the effected signal to blend in with the dry, unaffected signal. This is great when you have hardware effects processors and you have to make the most out of a few pieces of gear. On the computer side, aux effects do the same as their hardware counterparts. You can usually have as many aux effects as you want in software, because you can reuse the same plug-in on multiple tracks.

 

References:

Insert Effects, Aux Inputs, and Buses: http://www.netplaces.com/home-recording/mixers/insert-effects-aux-inputs-and-buses.htm

Inserts vs Effects Sends - Which to use for what: http://www.music-tech.com/headlines.php?subaction=showfull&id=1195154728&archive=&start_from=&ucat=1&

 

EffectHandle

When an EffectHandle is created, the EffectModule wrapping the engine is passed as a parameter and assigned to the member variable mEffect, so that later calls on the EffectHandle are forwarded to the EffectModule, and from there to the effect engine. Next, the constructor allocates a block of shared memory: the front of the block holds an effect_param_cblk_t, and the 1024 bytes after it are used for mBuffer. The constructor code is:

AudioFlinger::EffectHandle::EffectHandle(const sp<EffectModule>& effect,
                                         const sp<AudioFlinger::Client>& client,
                                         const sp<IEffectClient>& effectClient,
                                         int32_t priority)
    : BnEffect(), mEffect(effect), mEffectClient(effectClient),
      mClient(client), mPriority(priority), mHasControl(false)
{
    LOGV("constructor %p", this);
    // size of effect_param_cblk_t rounded up to int alignment
    int bufOffset = ((sizeof(effect_param_cblk_t) - 1) / sizeof(int) + 1) * sizeof(int);
    // allocate shared memory through the client's MemoryDealer:
    // EFFECT_PARAM_BUFFER_SIZE (1024) bytes plus bufOffset (the aligned
    // size of effect_param_cblk_t). The effect_param_cblk_t occupies the
    // first bufOffset bytes of the shared memory; the following 1024
    // bytes hold the parameter data
    mCblkMemory = client->heap()->allocate(EFFECT_PARAM_BUFFER_SIZE + bufOffset);
    if (mCblkMemory != 0) {
        // reinterpret the shared-memory base address as an effect_param_cblk_t pointer
        mCblk = static_cast<effect_param_cblk_t *>(mCblkMemory->pointer());
        if (mCblk) {
            new(mCblk) effect_param_cblk_t(); // placement new: run the constructor on that memory
            // mBuffer points at the 1024 bytes of shared memory after the cblk
            mBuffer = (uint8_t *)mCblk + bufOffset;
        }
    } else {
        LOGE("not enough memory for Effect size=%u",
             EFFECT_PARAM_BUFFER_SIZE + sizeof(effect_param_cblk_t));
        return;
    }
}

 

When the command function is called and the client side needs to pass parameter data to the server side, the data travels through the shared memory allocated above. The effect_param_cblk_t structure is defined in the header AudioEffectShared.h:

struct effect_param_cblk_t
{
    Mutex lock;
    volatile uint32_t clientIndex; // Current read/write index for application
    volatile uint32_t serverIndex; // Current read/write index for mediaserver
    uint8_t* buffer;               // start of parameter buffer

    effect_param_cblk_t()
        : lock(Mutex::SHARED), clientIndex(0), serverIndex(0) {}
};
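A hedged sketch of how such a client/server index pair can delimit parameter data written into the shared buffer (illustrative only: the real deferred-parameter path also takes the process-shared Mutex and uses a richer per-parameter record layout than the raw blobs shown here):

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for effect_param_cblk_t; no cross-process Mutex,
 * and buffer is inlined rather than placed after the cblk. */
typedef struct {
    volatile uint32_t clientIndex; /* write position advanced by the application */
    volatile uint32_t serverIndex; /* read position advanced by mediaserver */
    uint8_t buffer[1024];          /* parameter area */
} param_cblk_t;

/* Client side: append one parameter blob, advancing clientIndex. */
static int write_param(param_cblk_t *blk, const void *data, uint32_t size)
{
    if (blk->clientIndex + size > sizeof(blk->buffer))
        return -1; /* buffer full until the server consumes */
    memcpy(blk->buffer + blk->clientIndex, data, size);
    blk->clientIndex += size;
    return 0;
}

/* Server side (e.g. when a commit command arrives): consume everything
 * written since the last pass and catch serverIndex up. */
static uint32_t consume_params(param_cblk_t *blk)
{
    uint32_t consumed = blk->clientIndex - blk->serverIndex;
    /* ... hand buffer[serverIndex .. clientIndex) to the effect engine ... */
    blk->serverIndex = blk->clientIndex;
    return consumed;
}
```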

 

How the playback thread manages effects

The effect-creation function first performs validity checks, then checks by session ID whether a matching effect chain already exists, and then whether that chain already contains the requested effect (EffectModule); whatever is missing gets created. Finally it creates the EffectHandle, which is what the application side ultimately reaches through its cross-process calls.

// PlaybackThread::createEffect_l() must be called with AudioFlinger::mLock held
sp<AudioFlinger::EffectHandle> AudioFlinger::PlaybackThread::createEffect_l(
        const sp<AudioFlinger::Client>& client,
        const sp<IEffectClient>& effectClient,
        int32_t priority,
        int sessionId,
        effect_descriptor_t *desc, // the effect is created from this descriptor
        int *enabled,
        status_t *status)
{
    sp<EffectModule> effect;
    sp<EffectHandle> handle;
    status_t lStatus;
    sp<Track> track;
    sp<EffectChain> chain;
    bool chainCreated = false;
    bool effectCreated = false;
    bool effectRegistered = false;

    if (mOutput == 0) { // check the output
        LOGW("createEffect_l() Audio driver not initialized.");
        lStatus = NO_INIT;
        goto Exit;
    }

    // An auxiliary effect must be attached to session 0, i.e. the audio data after the mix.
    // Do not allow auxiliary effect on session other than 0
    if ((desc->flags & EFFECT_FLAG_TYPE_MASK) == EFFECT_FLAG_TYPE_AUXILIARY &&
        sessionId != AudioSystem::SESSION_OUTPUT_MIX) { // SESSION_OUTPUT_MIX is 0
        LOGW("createEffect_l() Cannot add auxiliary effect %s to session %d",
             desc->name, sessionId);
        lStatus = BAD_VALUE;
        goto Exit;
    }

    // Do not allow effects with session ID 0 on direct output or duplicating threads
    // TODO: add rule for hw accelerated effects on direct outputs with non PCM format
    if (sessionId == AudioSystem::SESSION_OUTPUT_MIX && mType != MIXER) {
        // direct-output and duplicating threads do not accept effects either
        LOGW("createEffect_l() Cannot add auxiliary effect %s to session %d",
             desc->name, sessionId);
        lStatus = BAD_VALUE;
        goto Exit;
    }

    LOGV("createEffect_l() thread %p effect %s on session %d", this, desc->name, sessionId);

    { // scope for mLock
        Mutex::Autolock _l(mLock);
        // check for existing effect chain with the requested audio session
        chain = getEffectChain_l(sessionId);
        if (chain == 0) {
            // create a new chain for this session
            LOGV("createEffect_l() new effect chain for session %d", sessionId);
            chain = new EffectChain(this, sessionId);
            addEffectChain_l(chain); // add the chain to the playback thread's list
            chain->setStrategy(getStrategyForSession_l(sessionId));
            chainCreated = true;
        } else {
            // the chain already exists: look up the matching effect in it
            effect = chain->getEffectFromDesc_l(desc);
        }

        LOGV("createEffect_l() got effect %p on chain %p",
             effect == 0 ? 0 : effect.get(), chain.get());

        if (effect == 0) { // no matching effect yet
            int id = mAudioFlinger->nextUniqueId();
            // Check CPU and memory usage: register with the audio policy
            lStatus = AudioSystem::registerEffect(desc, mId, chain->strategy(), sessionId, id);
            if (lStatus != NO_ERROR) {
                goto Exit;
            }
            effectRegistered = true;
            // create a new effect module (the engine wrapper) if none present in the chain
            effect = new EffectModule(this, chain, desc, id, sessionId);
            lStatus = effect->status();
            if (lStatus != NO_ERROR) {
                goto Exit;
            }
            lStatus = chain->addEffect_l(effect); // insert into the effect chain
            if (lStatus != NO_ERROR) {
                goto Exit;
            }
            effectCreated = true;
            effect->setDevice(mDevice);                // set the output device
            effect->setMode(mAudioFlinger->getMode()); // set the audio mode
        }
        // create effect handle and connect it to effect module; the application
        // side's cross-process calls ultimately land on this EffectHandle
        handle = new EffectHandle(effect, client, effectClient, priority);
        lStatus = effect->addHandle(handle);
        if (enabled) {
            *enabled = (int)effect->isEnabled();
        }
    }

Exit: // undo the work done so far on error
    if (lStatus != NO_ERROR && lStatus != ALREADY_EXISTS) {
        Mutex::Autolock _l(mLock);
        if (effectCreated) {
            chain->removeEffect_l(effect);
        }
        if (effectRegistered) {
            AudioSystem::unregisterEffect(effect->id());
        }
        if (chainCreated) {
            removeEffectChain_l(chain);
        }
        handle.clear();
    }

    if (status) {
        *status = lStatus;
    }
    return handle;
}

Original author: Android源码分析
Original article: https://blog.csdn.net/mirkerson/article/details/40818639