BootAnimation Analysis (Part 1)

Android shows several screens during boot, from the bootloader to the kernel to the final boot animation, and each stage is drawn by a different mechanism. Graphics has always struck me as one of the more interesting areas, and since I recently worked on a task related to BootAnimation, I am writing up its flow here as a record. It is a short and compact program, but it touches a lot of ground: the OpenGL ES/EGL APIs, SurfaceFlinger, the property service, Binder, and its own business logic.

bootanimation ships as a standalone binary that is started by the init process. Its main function is simple:

int main(int argc, char** argv)
{
#if defined(HAVE_PTHREADS)
    setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_DISPLAY);
#endif

    char value[PROPERTY_VALUE_MAX];
    property_get("debug.sf.nobootanimation", value, "0");
    int noBootAnimation = atoi(value);
    ALOGI_IF(noBootAnimation,  "boot animation disabled");
    if (!noBootAnimation) {

        sp<ProcessState> proc(ProcessState::self());
        ProcessState::self()->startThreadPool();

        // create the boot animation object
        sp<BootAnimation> boot = new BootAnimation();

        IPCThreadState::self()->joinThreadPool();

    }
    return 0;
}
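(As an aside, this property check is why the animation can be skipped when debugging: setting debug.sf.nobootanimation to 1, e.g. with setprop on a debuggable build or via the product's build.prop, makes main() return before anything is drawn.)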

Two calls stand out here:

1. ProcessState::self()->startThreadPool()

2. IPCThreadState::self()->joinThreadPool()

This pair appears in many of Android's native programs, so here is my understanding of the two functions. Both classes are defined in the Binder library under frameworks/native, i.e. they are utility classes for Binder IPC. In both cases the object is obtained through a static self() method, so both are singletons of a sort: ProcessState has exactly one instance for the whole process, while IPCThreadState is per-thread state, held in thread-local storage.
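To make the two lifetimes concrete, here is a minimal self-contained sketch (not the AOSP code; MyProcessState and MyIPCThreadState are made-up stand-ins) of the same two patterns: a process-wide singleton versus a per-thread instance parked in pthread TLS, which is essentially what IPCThreadState::self() does with pthread_getspecific:

#include <pthread.h>
#include <cstdio>

// Process-wide singleton: every caller in every thread gets the same object.
class MyProcessState {
public:
    static MyProcessState* self() {
        static MyProcessState instance;   // constructed once per process
        return &instance;
    }
private:
    MyProcessState() { printf("MyProcessState created once\n"); }
};

// Per-thread instance: each thread lazily creates its own copy and stores it
// under a pthread TLS key, much like IPCThreadState::self().
class MyIPCThreadState {
public:
    static MyIPCThreadState* self() {
        pthread_once(&gTLSOnce, [] { pthread_key_create(&gTLS, destructor); });
        auto* st = static_cast<MyIPCThreadState*>(pthread_getspecific(gTLS));
        if (st == nullptr) {
            st = new MyIPCThreadState();  // e.g. one mIn/mOut pair per thread
            pthread_setspecific(gTLS, st);
        }
        return st;
    }
private:
    static void destructor(void* p) { delete static_cast<MyIPCThreadState*>(p); }
    static pthread_key_t gTLS;
    static pthread_once_t gTLSOnce;
};

pthread_key_t MyIPCThreadState::gTLS;
pthread_once_t MyIPCThreadState::gTLSOnce = PTHREAD_ONCE_INIT;

int main() {
    // Same pointer from anywhere in the process:
    printf("%p == %p\n", (void*)MyProcessState::self(), (void*)MyProcessState::self());
    // A different MyIPCThreadState per thread (but stable within one thread):
    printf("thread state: %p\n", (void*)MyIPCThreadState::self());
    return 0;
}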

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true); // spawn the main pooled thread
    }
}
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}
class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }
    
protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }
    
    const bool mIsMain;
};

Note that the isMain parameter passed in above is true. threadLoop() calls the thread-local IPCThreadState's joinThreadPool() with true, then returns false, which tells the Thread framework to run the loop body only once before letting the thread exit. But if joinThreadPool() never returns, the thread can never finish either.
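The contract being relied on here is the Thread base class's: keep calling threadLoop() until it returns false or exit is requested. A tiny self-contained stand-in (not the AOSP Thread class; MiniThread and MiniPoolThread are invented names) demonstrating the same behavior:

#include <atomic>
#include <cstdio>
#include <thread>

// Minimal stand-in for the framework's Thread class: run() keeps calling
// threadLoop() until it returns false or an exit has been requested.
class MiniThread {
public:
    virtual ~MiniThread() = default;
    void run() {
        worker = std::thread([this] {
            while (!exitPending && threadLoop()) { /* loop again */ }
        });
    }
    void requestExit() { exitPending = true; }
    void join() { worker.join(); }
protected:
    virtual bool threadLoop() = 0;     // returning false ends the thread
    std::atomic<bool> exitPending{false};
private:
    std::thread worker;
};

// Like PoolThread: the single threadLoop() call blocks inside the "pool"
// work; returning false afterwards ends the thread.
class MiniPoolThread : public MiniThread {
protected:
    bool threadLoop() override {
        printf("joining the (pretend) thread pool...\n");
        return false;  // run once; if the body never returns, neither does the thread
    }
};

int main() {
    MiniPoolThread t;
    t.run();
    t.join();
    return 0;
}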

In fact, at this point we still don't know what startThreadPool() really does!

void IPCThreadState::joinThreadPool(bool isMain) // isMain defaults to true in the declaration
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    // First queue a command for the binder driver in mOut:
    // BC_ENTER_LOOPER if this is the main thread,
    // BC_REGISTER_LOOPER for an ordinary pooled thread.
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    
    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the foreground
    // one to avoid performing an initial transaction in the background.
    set_sched_policy(mMyThreadId, SP_FOREGROUND);
        
    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        // This first calls talkWithDriver(), which hands whatever was queued
        // in mOut (above) over to the driver.
        result = getAndExecuteCommand(); // the main work happens here

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }
        
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF); // keep waiting on the driver unless one of these two errors occurs

    // Same pattern again: write into mOut, then talkWithDriver() pushes the
    // data to the binder driver, here telling it this thread is leaving the pool.
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}

Next, the core function: getAndExecuteCommand().

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

/**
 * Step 1: read a command from the driver.
 */
    result = talkWithDriver(); // doReceive defaults to true. This function has quite a high profile: it regularly shows up in thread backtraces when dumping process state.
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++; // one more thread is busy handling binder work
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
/**
 * Step 2: execute the command; this path mostly matters on the service side
 * of a binder connection.
 */
        result = executeCommand(cmd);

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool.  The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace.  Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}
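Why maintain mExecutingThreadsCount and broadcast mThreadCountDecrement at all? One consumer, hedged from memory of libbinder (see IPCThreadState::blockUntilThreadAvailable() for the real code), is a caller that wants to wait until the pool has a free thread again. A simplified sketch of that wait, with the members replaced by stand-in globals:

#include <pthread.h>

// Hedged sketch of how the counter + condvar pair can be consumed; libbinder's
// IPCThreadState::blockUntilThreadAvailable() does essentially this wait.
static pthread_mutex_t gThreadCountLock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gThreadCountDecrement = PTHREAD_COND_INITIALIZER;
static int gExecutingThreadsCount = 0;
static int gMaxThreads = 15;  // assumption: libbinder's DEFAULT_MAX_BINDER_THREADS

void blockUntilThreadAvailable() {
    pthread_mutex_lock(&gThreadCountLock);
    // Each pthread_cond_broadcast in getAndExecuteCommand() wakes us up to
    // re-check whether a pooled thread has become free.
    while (gExecutingThreadsCount >= gMaxThreads) {
        pthread_cond_wait(&gThreadCountDecrement, &gThreadCountLock);
    }
    pthread_mutex_unlock(&gThreadCountLock);
}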

Next, let's look at how talkWithDriver() actually talks to the driver.

binder_write_read is a struct defined in the binder kernel UAPI header (pulled in through bionic's kernel headers); it records the state of one read/write exchange with the driver. mIn and mOut hold the data flowing into and out of the current thread, i.e. data received from binder and data waiting to be written out. Since IPCThreadState is per-thread, these two Parcels are effectively thread-local as well.

struct binder_write_read {
    binder_size_t write_size;
    binder_size_t write_consumed;
    binder_uintptr_t write_buffer;
    binder_size_t read_size;
    binder_size_t read_consumed;
    binder_uintptr_t read_buffer;
};
// Step 1: this function can read and write in the same call, i.e. send and
// receive at once. Whenever doReceive is true it will also try to receive,
// and receiving means draining the todo queues the driver keeps for this process.
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    
    binder_write_read bwr; // this struct is handed to the driver via ioctl
    
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize(); // true when the mIn parcel has been fully consumed
    
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    // fill in the outgoing data
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    // point the read buffer at where incoming data should land
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    ...
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
...
#if defined(HAVE_ANDROID_OS)
        // exchange data with the binder driver via ioctl; on return the driver
        // has updated the write_consumed and read_consumed fields
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);
    // update the data positions of mIn and mOut based on how much the driver consumed/produced
    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        ...
        return NO_ERROR;
    }
    
    return err;
}
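To see how little magic talkWithDriver() hides, here is a hedged, minimal raw-binder sketch in the spirit of the old C servicemanager: open /dev/binder, announce BC_ENTER_LOOPER, and exchange data with the very same BINDER_WRITE_READ ioctl. (Error handling trimmed; the constants come from the kernel UAPI header linux/android/binder.h; this is a sketch, not production code.)

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h>

int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) return 1;
    // The driver requires a mapped buffer before it can deliver transaction
    // payloads (servicemanager maps 128K read-only the same way).
    void* map = mmap(nullptr, 128 * 1024, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) return 1;

    // "mOut": a single BC_ENTER_LOOPER command, exactly what
    // joinThreadPool(true) queues before its first talkWithDriver().
    uint32_t enter = BC_ENTER_LOOPER;

    binder_write_read bwr{};
    bwr.write_size   = sizeof(enter);
    bwr.write_buffer = (binder_uintptr_t)&enter;
    bwr.read_size    = 0;  // write-only this round, like talkWithDriver(false)

    if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0) return 1;

    // A real looper would now loop: point read_buffer at an "mIn" buffer,
    // set read_size, and reissue the same ioctl, which blocks until the
    // driver delivers BR_* commands (BR_NOOP, BR_TRANSACTION, ...).
    return 0;
}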

Step 2: execute the command. executeCommand() handles a lot of cases, because the driver can send many kinds of commands up; here we pick the most important and most common one, BR_TRANSACTION.

case BR_TRANSACTION: // an incoming transaction to be handled
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr)); // copy the incoming parcel data into tr
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;
            
            Parcel buffer; // then wrap tr's payload back into a Parcel
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
            
            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

            // Note that the driver supplies the caller's pid/uid here, which is
            // exactly why Binder's API can offer things like Binder.getCallingPid().
            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    // We have inherited a reduced priority from the caller, but do not
                    // want to run in that state in this process.  The driver set our
                    // priority already (though not our scheduling class), so bounce
                    // it back to the default before invoking the transaction.
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    // We want to use the inherited priority from the caller.
                    // Ensure this thread is in the background scheduling class,
                    // since the driver won't modify scheduling classes for us.
                    // The scheduling group is reset to default by the caller
                    // once this method returns after the transaction is complete.
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;//这个是需要回传给client端的数据
            status_t error;
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                    << " / obj " << tr.target.ptr << " / code "
                    << TypeCode(tr.code) << ": " << indent << buffer
                    << dedent << endl
                    << "Data addr = "
                    << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                    << ", offsets addr="
                    << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
            }
/*************** Where this BBinder comes from is not yet clear ***************/
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie); // the cookie in a binder transaction carries the userspace address of the target binder object; replies likewise need to find the object that originated the node
                // dispatch the transaction through BBinder::transact
                error = b->transact(tr.code, buffer, &reply, tr.flags);

            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }
/*************************************************************************/
            //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);
            
            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                // send the reply parcel back to the caller
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }
            
            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
            
        }
        break;
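The b->transact() call above lands in the virtual BBinder::onTransact(), which is exactly what a Bn-side service class overrides. A hedged minimal example (BnDemo and the transaction code 1 are invented for illustration):

#include <binder/Binder.h>
#include <binder/Parcel.h>
#include <utils/String16.h>

using namespace android;

// Made-up service object: when BR_TRANSACTION delivers a transaction whose
// cookie points at this object, b->transact(code, data, &reply, flags)
// dispatches into onTransact below.
class BnDemo : public BBinder {
protected:
    status_t onTransact(uint32_t code, const Parcel& data,
                        Parcel* reply, uint32_t flags) override {
        switch (code) {
        case 1: {  // hypothetical "HELLO" transaction code
            reply->writeString16(String16("hello from the Bn side"));
            return NO_ERROR;
        }
        default:
            // Unknown codes fall through to the base class (ping, dump, ...).
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};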

So ProcessState::self()->startThreadPool() boils down to: spawn one pooled thread, have it announce itself to the binder driver, and then have that thread loop through talkWithDriver() and executeCommand().

Back in main(), IPCThreadState::self()->joinThreadPool() is then called once more, putting the main thread into the same talkWithDriver() loop.

Having looked at the binder-thread setup in main(), three questions come to mind:

1. In the binder client/server model, a service may face many concurrent client requests; how does the service side spin up multiple threads to handle them?

Answer: the binder driver sends the service a BR_SPAWN_LOOPER command, and the receiving thread then calls mProcess->spawnPooledThread(false) to start a new pooled thread.

case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        //1. false vs true decides which command the new thread writes to the driver when it starts (BC_REGISTER_LOOPER vs BC_ENTER_LOOPER)
        //2. and whether the thread may leave the while loop on a timeout
        break;
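Note that the driver only sends BR_SPAWN_LOOPER while the pool is below its configured limit. That limit is pushed to the driver with the BINDER_SET_MAX_THREADS ioctl, which libbinder wraps in a ProcessState method; roughly (hedged; in the AOSP sources the default is DEFAULT_MAX_BINDER_THREADS = 15):

// Before startThreadPool(): cap the pool at 4 extra binder threads.
ProcessState::self()->setThreadPoolMaxThreadCount(4);
ProcessState::self()->startThreadPool();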

2. When a transaction command arrives, where does the BBinder we obtain come from?

Answer: this BBinder must be some BnService, but the open question is where that thing gets set up in the first place (unresolved for now; pointers from anyone who knows would be appreciated).
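For what it's worth, a plausible answer can be read out of libbinder's Parcel code (hedged, simplified from memory, not verbatim AOSP): when a local BBinder is first written into a Parcel, e.g. during addService(), flatten_binder() stores the object's address in the flat object's cookie field; the driver keeps that cookie in the binder_node it creates and echoes it back as tr.cookie on every incoming transaction, which is where the (BBinder*)tr.cookie cast above gets its value.

// Inside Parcel's flatten_binder, for a local (same-process) binder:
flat_binder_object obj;
obj.type   = BINDER_TYPE_BINDER;                      // a local object
obj.binder = (binder_uintptr_t)local->getWeakRefs();  // weak-ref pointer
obj.cookie = (binder_uintptr_t)local;                 // the raw BBinder*
// The driver remembers this cookie in the node it creates for the object
// and later echoes it as tr.cookie -> the (BBinder*)tr.cookie cast above.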

3. The two calls above exist to receive data from the binder driver, which is a service-side concern. But bootanimation has no intention of being a service; it is only a client of SurfaceFlinger. Why perform these two steps at all? Couldn't they simply be removed?

Answer: for a pure client, sending data ultimately means calling IPCThreadState's transact() function, which follows the usual pattern: write into mOut, then talkWithDriver() to push the data out. That of course requires the binder device to be open, but opening it happens in the ProcessState constructor anyway, regardless of whether you ever call ProcessState::self()->startThreadPool() or IPCThreadState::self()->joinThreadPool(). So my view is that bootanimation would still run with both calls removed (to be verified once I have a build environment again).
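To back this up, here is a hedged sketch of the pure-client path (the interface token and transaction code below are placeholders, not a working SurfaceFlinger call): defaultServiceManager() constructs ProcessState, whose constructor opens and mmaps /dev/binder, so transact() works without ever joining the thread pool. The pool only becomes necessary if the process must also receive inbound binder traffic, e.g. callbacks or death notifications.

#include <binder/IServiceManager.h>
#include <binder/Parcel.h>

using namespace android;

int main() {
    // ProcessState::self() (called inside defaultServiceManager()) opens
    // /dev/binder and mmaps it; that alone is enough to act as a client.
    sp<IBinder> sf = defaultServiceManager()->getService(String16("SurfaceFlinger"));
    if (sf == nullptr) return 1;

    Parcel data, reply;
    data.writeInterfaceToken(String16("android.ui.ISurfaceComposer")); // assumed token
    // '1' is a placeholder transaction code; a real client would use the
    // ISurfaceComposer enum values.
    status_t err = sf->transact(1 /* placeholder */, data, &reply);
    return err == NO_ERROR ? 0 : 1;
}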

That wraps up this look at how a native program sets up its binder threads in main(). With this in hand, it becomes easier to read other native programs such as servicemanager and surfaceflinger. There are surely gaps or misunderstandings here; corrections are welcome, and thanks in advance.

    Original author: Kelvin wu
    Original article: https://zhuanlan.zhihu.com/p/22784063
    This article is reposted from the web to share knowledge; if it infringes on any rights, please contact the blog owner for removal.