Video capture in iOS using MonoTouch

I have Objective-C code that creates, configures, and starts a video capture session without any problems. I ported the sample to C# with MonoTouch 4.0.3 and ran into a couple of issues. Here is the code:

    void Initialize ()
    {   
        // Create notifier delegate class 
        captureVideoDelegate = new CaptureVideoDelegate(this);

        // Create capture session
        captureSession = new AVCaptureSession();
        captureSession.SessionPreset = AVCaptureSession.Preset640x480;

        // Create capture device
        captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);

        // Create capture device input
        NSError error;
        captureDeviceInput = new AVCaptureDeviceInput(captureDevice, out error);
        captureSession.AddInput(captureDeviceInput);

        // Create capture device output
        captureVideoOutput = new AVCaptureVideoDataOutput();
        captureSession.AddOutput(captureVideoOutput);
        captureVideoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA;
        captureVideoOutput.MinFrameDuration = new CMTime(1, 30);
        //
        // ISSUE 1
        // In the original Objective-C code I was creating a dispatch_queue_t object, passing it to
        // setSampleBufferDelegate:queue message and worked, here I could not find an equivalent to 
        // the queue mechanism. Also not sure if the delegate should be used like this).
        //
        captureVideoOutput.SetSampleBufferDelegatequeue(captureVideoDelegate, ???????);

        // Create preview layer
        previewLayer = AVCaptureVideoPreviewLayer.FromSession(captureSession);
        previewLayer.Orientation = AVCaptureVideoOrientation.LandscapeRight;
        //
        // ISSUE 2:
        // Didn't find any VideoGravity related enumeration in MonoTouch (not sure if string will work)
        //
        previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill";
        previewLayer.Frame = new RectangleF(0, 0, 1024, 768);
        this.View.Layer.AddSublayer(previewLayer);

        // Start capture session
        captureSession.StartRunning();

    }

    #endregion

    public class CaptureVideoDelegate : AVCaptureVideoDataOutputSampleBufferDelegate
    {
        private VirtualDeckViewController mainViewController;

        public CaptureVideoDelegate(VirtualDeckViewController viewController)
        {
            mainViewController = viewController;
        }

        public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
        {
            // TODO: Implement - see: http://go-mono.com/docs/index.aspx?link=T%3aMonoTouch.Foundation.ModelAttribute

        }
    }

Issue 1:
I am not sure how to correctly use the delegate in the SetSampleBufferDelegatequeue method. I also could not find a mechanism equivalent to the dispatch_queue_t object, which works fine in Objective-C as the second parameter.
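For reference, GCD queues do appear to be exposed in MonoTouch as `MonoTouch.CoreFoundation.DispatchQueue`; a sketch of what the call could look like, assuming the `SetSampleBufferDelegateAndQueue` binding is available in this MonoTouch version:

```csharp
// Sketch (untested): create a serial GCD queue and attach the sample
// buffer delegate to it. Assumes MonoTouch.CoreFoundation.DispatchQueue
// and the SetSampleBufferDelegateAndQueue binding; the queue label is arbitrary.
var queue = new MonoTouch.CoreFoundation.DispatchQueue ("captureQueue");
captureVideoOutput.SetSampleBufferDelegateAndQueue (captureVideoDelegate, queue);
```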

Issue 2:
I did not find any VideoGravity enumeration in the MonoTouch library, and I am not sure whether passing a string with the constant's value will work.
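Since the property is bound as a string, passing the constant's raw value should work; MonoTouch also seems to expose the gravity constants on `AVPlayerLayer` (an assumption about this binding version):

```csharp
// The raw string is the value of the AVLayerVideoGravityResizeAspectFill
// constant, so this form should work as-is:
previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill";
// or, if the constant is bound in your MonoTouch version (assumption):
// previewLayer.VideoGravity = AVPlayerLayer.GravityResizeAspectFill;
```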

I have searched for any clue to solve these issues, but found no clear sample. Any sample or information on how to do the same in MonoTouch would be highly appreciated.

Thanks a lot.

Best answer: Here is my code. Make good use of it. I stripped it down to just the important stuff; all the initialization is there, as well as the reading of the sample output buffer.

I then have code that processes the CVImageBuffer in a linked custom Objective-C library. If you need to process it in MonoTouch, you need to go the extra mile and convert it to a CGImage or UIImage. There is no function for that in MonoTouch (AFAIK), so you have to bind it yourself from plain Objective-C. A sample in Objective-C is here: how to convert a CVImageBufferRef to UIImage
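The linked Objective-C sample can also be transcribed to managed code. An untested sketch, assuming the MonoTouch CoreVideo/CoreGraphics bindings shown (names based on the MonoTouch 4.x API):

```csharp
using System;
using MonoTouch.CoreGraphics;
using MonoTouch.CoreMedia;
using MonoTouch.CoreVideo;
using MonoTouch.UIKit;

// Sketch (untested): convert a BGRA CVPixelBuffer from the sample buffer
// to a UIImage by drawing it through a CGBitmapContext.
static UIImage ImageFromSampleBuffer (CMSampleBuffer sampleBuffer)
{
    using (var pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer) {
        // Lock the base address so we can read the pixel data
        pixelBuffer.Lock (CVOptionFlags.None);
        try {
            using (var colorSpace = CGColorSpace.CreateDeviceRGB ())
            using (var context = new CGBitmapContext (
                pixelBuffer.BaseAddress,
                pixelBuffer.Width, pixelBuffer.Height,
                8, pixelBuffer.BytesPerRow, colorSpace,
                CGImageAlphaInfo.PremultipliedFirst))
            using (var cgImage = context.ToImage ())
                return UIImage.FromImage (cgImage);
        } finally {
            pixelBuffer.Unlock (CVOptionFlags.None);
        }
    }
}
```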

    public void InitCapture ()
    {
        try
        {
            // Setup the input
            NSError error;
            captureInput = new AVCaptureDeviceInput (AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video), out error);

            // Setup the output
            captureOutput = new AVCaptureVideoDataOutput ();
            captureOutput.AlwaysDiscardsLateVideoFrames = true;
            captureOutput.SetSampleBufferDelegateAndQueue (avBufferDelegate, dispatchQueue);
            captureOutput.MinFrameDuration = new CMTime (1, 10);

            // Set the video output to store frames in BGRA (compatible across devices)
            captureOutput.VideoSettings = new AVVideoSettings (CVPixelFormatType.CV32BGRA);

            // Create a capture session
            captureSession = new AVCaptureSession ();
            captureSession.SessionPreset = AVCaptureSession.PresetMedium;
            captureSession.AddInput (captureInput);
            captureSession.AddOutput (captureOutput);

            // Setup the preview layer
            prevLayer = new AVCaptureVideoPreviewLayer (captureSession);
            prevLayer.Frame = liveView.Bounds;
            prevLayer.VideoGravity = "AVLayerVideoGravityResize"; // image may be slightly distorted, but red bar position will be accurate

            liveView.Layer.AddSublayer (prevLayer);

            StartLiveDecoding ();
        }
        catch (Exception ex)
        {
            Console.WriteLine (ex.ToString ());
        }
    }
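The `avBufferDelegate` and `dispatchQueue` fields referenced in `InitCapture` are created elsewhere in the answer's class; a minimal setup might look like this (the field names come from the code above, the delegate class name and queue label are assumptions):

```csharp
// Sketch: the two fields InitCapture relies on. AVBufferDelegate stands in
// for your AVCaptureVideoDataOutputSampleBufferDelegate subclass.
avBufferDelegate = new AVBufferDelegate (this);
dispatchQueue = new MonoTouch.CoreFoundation.DispatchQueue ("cameraQueue");
```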

    public void DidOutputSampleBuffer (AVCaptureOutput captureOutput, MonoTouch.CoreMedia.CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
    {
        Console.WriteLine ("DidOutputSampleBuffer: enter");

        if (isScanning)
        {
            CVImageBuffer imageBuffer = sampleBuffer.GetImageBuffer ();

            Console.WriteLine ("DidOutputSampleBuffer: calling decode");

            // NSLog(@"got image w=%d h=%d bpr=%d", CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer), CVPixelBufferGetBytesPerRow(imageBuffer));
            // call the decoder
            DecodeImage (imageBuffer);
        }
        else
        {
            Console.WriteLine ("DidOutputSampleBuffer: not scanning");
        }

        Console.WriteLine ("DidOutputSampleBuffer: quit");
    }