Android camera still capture flow

In the previous articles we looked at how the preview flow works. So how is a photo actually taken? Let's walk through it below.

APP

When we tap the shutter button, the app builds a capture request, setting parameters such as the resolution, and then issues it. Let's start tracing from OneCameraImpl::takePicture().

OneCameraImpl::takePicture()
    OneCameraImpl::takePictureNow()
        
    /**
     * Take picture immediately. Parameters passed through from takePicture().
     */
    public void takePictureNow(PhotoCaptureParameters params, CaptureSession session) {
        long dt = SystemClock.uptimeMillis() - mTakePictureStartMillis;
        Log.d(TAG, "Taking shot with extra AF delay of " + dt + " ms.");
        try {
            // JPEG capture.
            /* 1. Create the still-capture request. */
            CaptureRequest.Builder builder = mDevice
                    .createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            /* 2. Once the request template comes back from the lower layers,
             *    set the modes and other parameters the app needs. */
            builder.setTag(RequestTag.CAPTURE);
            addBaselineCaptureKeysToRequest(builder);
        
            // Enable lens-shading correction for even better DNGs.
            if (sCaptureImageFormat == ImageFormat.RAW_SENSOR) {
                builder.set(CaptureRequest.STATISTICS_LENS_SHADING_MAP_MODE,
                        CaptureRequest.STATISTICS_LENS_SHADING_MAP_MODE_ON);
            } else if (sCaptureImageFormat == ImageFormat.JPEG) {
                builder.set(CaptureRequest.JPEG_QUALITY, JPEG_QUALITY);
                builder.set(CaptureRequest.JPEG_ORIENTATION,
                        CameraUtil.getJpegRotation(params.orientation, mCharacteristics));
            }
        
            /* 3. Add the target surfaces. Why are there two here?
             *    Because preview keeps running while the shot is taken:
             *    one is the preview surface, the other is the ImageReader's surface. */
            builder.addTarget(mPreviewSurface);
            builder.addTarget(mCaptureImageReader.getSurface());
            CaptureRequest request = builder.build();
        
            if (DEBUG_WRITE_CAPTURE_DATA) {
                final String debugDataDir = makeDebugDir(params.debugDataFolder,
                        "normal_capture_debug");
                Log.i(TAG, "Writing capture data to: " + debugDataDir);
                CaptureDataSerializer.toFile("Normal Capture", request, new File(debugDataDir,
                        "capture.txt"));
            }
        
            /* 4. Submit the request to the framework. */
            mCaptureSession.capture(request, mCaptureCallback, mCameraHandler);
        } catch (CameraAccessException e) {
            Log.e(TAG, "Could not access camera for still image capture.");                                                                                                                                  
            broadcastReadyState(true);
            params.callback.onPictureTakingFailed();
            return;
        }   
        synchronized (mCaptureQueue) {
            mCaptureQueue.add(new InFlightCapture(params, session));
        }   
    }  
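The addBaselineCaptureKeysToRequest() call above copies the app's current 3A and zoom state onto the builder so the shot matches what the preview shows. Its exact contents depend on the Camera2 app version; the sketch below only illustrates the kind of keys set there (mCropRegion is an assumed field name, not necessarily the real one).

    /* Illustrative sketch only -- not the verbatim OneCameraImpl implementation. */
    private void addBaselineCaptureKeysToRequest(CaptureRequest.Builder builder) {
        // Keep the still capture consistent with the current zoom/crop (assumed field).
        builder.set(CaptureRequest.SCALER_CROP_REGION, mCropRegion);
        // Run the normal 3A pipeline: auto mode, continuous AF tuned for stills,
        // auto exposure and auto white balance.
        builder.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_AUTO);
        builder.set(CaptureRequest.CONTROL_AF_MODE,
                CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
        builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
        builder.set(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_AUTO);
    }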

When the capture request is created, AndroidCameraDeviceProxy::createCaptureRequest() is called and we enter the Android framework; TEMPLATE_STILL_CAPTURE is the template for capturing a single still image. From there, CameraDeviceImpl::createCaptureRequest() calls createDefaultRequest() through mRemoteDevice. Building this request is much like building the preview request and is ultimately applied down to the HAL, so we will not trace it in detail.
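For reference, CameraDeviceImpl::createCaptureRequest() essentially asks the remote device for the default metadata of the requested template and wraps it in a builder. A simplified sketch (the binder call and the Builder constructor arguments differ slightly across Android versions):

[impl/CameraDeviceImpl.java]
    /* Simplified sketch; error handling and version-specific arguments omitted. */
    public CaptureRequest.Builder createCaptureRequest(int templateType)
            throws CameraAccessException {
        synchronized (mInterfaceLock) {
            checkIfCameraClosedOrInError();

            // Ask CameraService (and, below it, the HAL) for the default
            // metadata of this template, e.g. TEMPLATE_STILL_CAPTURE.
            CameraMetadataNative templatedRequest =
                    mRemoteDevice.createDefaultRequest(templateType);

            // Wrap the returned metadata in a builder the app can modify.
            return new CaptureRequest.Builder(templatedRequest);
        }
    }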

In the app, mCaptureSession.capture() calls into CameraCaptureSessionImpl::capture(), which takes us into the framework.

Let's now look at how the framework completes the capture.

Framework

[impl/CameraCaptureSessionImpl.java]
    @Override
    public int capture(CaptureRequest request, CaptureCallback callback,
            Handler handler) throws CameraAccessException {
        checkCaptureRequest(request);
             
        synchronized (mDeviceImpl.mInterfaceLock) {
            checkNotClosed();
             
            handler = checkHandler(handler, callback);
            
            /* After a series of parameter checks, this method calls mDeviceImpl.capture(). */
            return addPendingSequence(mDeviceImpl.capture(request,
                    createCaptureCallbackProxy(handler, callback), mDeviceExecutor));
        }    
    }
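Two small helpers appear here: addPendingSequence() records the returned sequence id so the session knows which capture sequences are still in flight, and createCaptureCallbackProxy() re-targets the device-level callbacks at this session. Roughly, the proxy looks like this (an illustrative simplification, not the verbatim framework code):

    /* Illustrative simplification of the callback proxy. */
    private CameraDeviceImpl.CaptureCallback createCaptureCallbackProxy(
            final Handler handler, final CaptureCallback callback) {
        return new CameraDeviceImpl.CaptureCallback() {
            @Override
            public void onCaptureCompleted(CameraDevice camera, CaptureRequest request,
                    TotalCaptureResult result) {
                if (callback != null) {
                    // Report the event against this session (the real code posts
                    // this onto the handler the app supplied).
                    callback.onCaptureCompleted(CameraCaptureSessionImpl.this, request, result);
                }
            }
            // onCaptureStarted/onCaptureProgressed/onCaptureFailed are forwarded
            // the same way.
        };
    }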

[impl/CameraDeviceImpl.java]
    public int capture(CaptureRequest request, CaptureCallback callback, Executor executor)                                                                                 
            throws CameraAccessException {

        List<CaptureRequest> requestList = new ArrayList<CaptureRequest>();
        requestList.add(request);
        /* We are only taking a single shot, so the streaming argument is false:
         * this request will not be resubmitted repeatedly. */
        return submitCaptureRequest(requestList, callback, executor, /*streaming*/false);
    }  

In CameraDeviceImpl::submitCaptureRequest(), after the arguments are checked, the request is handed to CameraService through mRemoteDevice.submitRequestList(). The remaining processing is similar to the preview flow, so we will not trace it in detail.
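The core of submitCaptureRequest() looks roughly like the trimmed sketch below: the request array crosses the binder boundary via mRemoteDevice.submitRequestList(), and the callback is remembered so that results coming back from CameraService can be dispatched to it. (The real method also validates the target surfaces, handles the repeating case, and its CaptureCallbackHolder arguments vary by Android version.)

[impl/CameraDeviceImpl.java]
    /* Trimmed sketch of the key steps; not the complete framework method. */
    private int submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,
            Executor executor, boolean repeating) throws CameraAccessException {
        synchronized (mInterfaceLock) {
            checkIfCameraClosedOrInError();

            // Hand the request(s) to CameraService over binder; for a single
            // still capture, repeating is false.
            CaptureRequest[] requestArray =
                    requestList.toArray(new CaptureRequest[requestList.size()]);
            SubmitInfo requestInfo = mRemoteDevice.submitRequestList(requestArray, repeating);

            // Remember the callback so incoming results can be routed back to it
            // (this is what later drives onCaptureCompleted in the app).
            if (callback != null) {
                mCaptureCallbackMap.put(requestInfo.getRequestId(),
                        new CaptureCallbackHolder(callback, requestList, executor, repeating,
                                mNextSessionId - 1));
            }
            return requestInfo.getRequestId();
        }
    }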

Next, let's see what the app does when the result of the request comes back from the lower layers.

When the capture request returns

From OneCameraImpl::takePictureNow() above, we can see that the callback was set to a CameraCaptureSession.CaptureCallback object. When the capture request completes, the following method is invoked.

[OneCameraImpl.java]
    /**
     * Common listener for preview frame metadata.
     */
    private final CameraCaptureSession.CaptureCallback mCaptureCallback =
            new CameraCaptureSession.CaptureCallback() {
				...
                @Override
                public void onCaptureCompleted(CameraCaptureSession session,
                        CaptureRequest request, TotalCaptureResult result) {
                    ...
     
                    /* When this request is tagged as CAPTURE, perform the steps below. */
                    if (request.getTag() == RequestTag.CAPTURE) {
                        // Add the capture result to the latest in-flight
                        // capture. If all the data for that capture is
                        // complete, store the image on disk.
                        InFlightCapture capture = null;
                        synchronized (mCaptureQueue) {
                            if (mCaptureQueue.getFirst().setCaptureResult(result)
                                    .isCaptureComplete()) {
                                capture = mCaptureQueue.removeFirst();
                            }
                        }
                        if (capture != null) {
                            /* The image data gets saved here. */
                            OneCameraImpl.this.onCaptureCompleted(capture);
                        }
                    }
                    super.onCaptureCompleted(session, request, result);
                }
            };
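Note that the TotalCaptureResult above is only the metadata half of the shot; the JPEG/RAW bytes arrive on the ImageReader surface added in takePictureNow(). The matching listener in OneCameraImpl looks roughly like this (paraphrased, so treat it as a sketch): an InFlightCapture is only complete once it holds both the image and the capture result; whichever arrives last triggers the save.

[OneCameraImpl.java]
    /* Sketch of the image side of the capture (paraphrased from the AOSP Camera2 app). */
    private final ImageReader.OnImageAvailableListener mCaptureImageListener =
            new ImageReader.OnImageAvailableListener() {
                @Override
                public void onImageAvailable(ImageReader reader) {
                    // Attach the image to the oldest in-flight capture; if that
                    // capture now also has its TotalCaptureResult, save it.
                    InFlightCapture capture = null;
                    synchronized (mCaptureQueue) {
                        if (mCaptureQueue.getFirst().setImage(reader.acquireLatestImage())
                                .isCaptureComplete()) {
                            capture = mCaptureQueue.removeFirst();
                        }
                    }
                    if (capture != null) {
                        OneCameraImpl.this.onCaptureCompleted(capture);
                    }
                }
            };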

Since many of the parameters are similar to those of the preview flow, we won't trace them in detail here.
