The number of frame sizes supported is in TCameraInfo::iNumVideoFrameSizesSupported. The sizes themselves are returned by the CCamera::EnumerateVideoFrameSizes() method.

```cpp
TInt formatsSupported = iInfo.iVideoFrameFormatsSupported;
// Look for a format that the application can understand
// (in this example we just show one format, YUV422).
if (formatsSupported & CCamera::EFormatYUV422)
    {
    CCamera::TFormat format = CCamera::EFormatYUV422;
    for (TInt i = 0; i < iInfo.iNumVideoFrameSizesSupported; ++i)
        {
        TSize size;
        iCamera->EnumerateVideoFrameSizes(size, i, format);
        iSizeArray.AppendL(size);
        }
    // Look for the most suitable size from the list; in this
    // example, we just choose the first one in the list.
    TInt sizeIndex = 0;
    // iRateArray is defined elsewhere as RArray<TReal32>.
    for (TInt i = 0; i < iInfo.iNumVideoFrameRatesSupported; ++i)
        {
        TReal32 rate;
        iCamera->EnumerateVideoFrameRates(rate, i, format, sizeIndex);
        iRateArray.AppendL(rate);
        }
    }
```

Once we have enumerated the video capture sizes, we can enumerate the video frame rates supported for the selected format and size.
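The loop above simply records every supported size; choosing "the most suitable size" is left to the application. One common policy is to pick the largest enumerated size that fits within a target. A minimal plain C++ sketch of that policy follows (the Size struct and PickSizeIndex helper are illustrative stand-ins, not part of the ECam API):

```cpp
#include <cstddef>
#include <vector>

// Illustrative stand-in for Symbian's TSize; not the real class.
struct Size { int iWidth; int iHeight; };

// Return the index of the largest enumerated size that fits within the
// target dimensions; fall back to index 0 if none fits.
int PickSizeIndex(const std::vector<Size>& aSizes, const Size& aTarget)
{
    int best = 0;
    long bestArea = -1;
    for (std::size_t i = 0; i < aSizes.size(); ++i)
    {
        const Size& s = aSizes[i];
        if (s.iWidth <= aTarget.iWidth && s.iHeight <= aTarget.iHeight)
        {
            const long area = static_cast<long>(s.iWidth) * s.iHeight;
            if (area > bestArea)
            {
                bestArea = area;
                best = static_cast<int>(i);
            }
        }
    }
    return best;
}
```

For example, with the enumerated sizes 640x480, 320x240 and 176x144 and a 352x288 target, this selects index 1 (320x240).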
The number of frame rates supported is in TCameraInfo::iNumVideoFrameRatesSupported. The supported frame rates are returned by the CCamera::EnumerateVideoFrameRates() method.

3.6.2 Preparing for Video Capture

Before we can capture video, we need to prepare the CCamera object using the CCamera::PrepareVideoCaptureL() method. This allows the camera subsystem to allocate any memory necessary and perform any other setup required to capture video.

```cpp
iCamera->PrepareVideoCaptureL(aFormat, aSizeIndex, aRateIndex,
                              aBuffersToUse, aFramesPerBuffer);
```

• If video is not supported, then the function leaves with the error KErrNotSupported.
• Video settings should be used for capturing video only; for image capture, the camera should be prepared separately (see Section 3.5).
• Video capture cannot take place while a still image capture is active.

3.6.3 Capturing Video

Video capture is an asynchronous operation.
It is initiated simply by making a call to CCamera::StartVideoCapture().

```cpp
iCamera->StartVideoCapture();
```

The camera then fills the buffers as frames become available. Once a buffer has been filled, you receive a callback to MCameraObserver2::VideoBufferReady() with the buffer in an MCameraBuffer object. If an error occurs with video capture, then the client is notified by MCameraObserver2::VideoBufferReady() passing an error, in which case no valid frame data is included.

If the error is fatal to the process of capturing video, such as the camera being switched off, then video capture stops and outstanding buffers are deleted by the camera object.

If the camera runs out of frame buffers, then there will be no callbacks until you call MCameraBuffer::Release(), after which MCameraObserver2::VideoBufferReady() will start being called again.

You can use multiple sequences of CCamera::StartVideoCapture() and CCamera::StopVideoCapture() calls following a single call to CCamera::PrepareVideoCaptureL().

In the callback, you need to extract the data from the MCameraBuffer object, as a descriptor, a bitmap or a handle to a kernel chunk. In all three cases, the camera retains ownership of the data.
Once the data has been used, the buffer should be released by calling its Release() function, which indicates that the camera may reuse or delete it. (If you are using MCameraObserver, call MFrameBuffer::Release() from MCameraObserver::FrameBufferReady() instead.)

The buffer format returned depends on the format selected. Either the image data is available as a bitmap (owned by the Font and Bitmap Server) or it is available as a descriptor and chunk (the descriptor refers to the memory in the chunk). An attempt to access an inappropriate format will cause the function to leave.

For the purpose of synchronization, a buffer provides the index of the starting frame in the buffer and the elapsed time (timestamp) since the video capture started.
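Assuming the frames in a buffer were captured sequentially at the requested rate, each frame's timestamp can be derived from the buffer's first-frame timestamp and the capture rate. A plain C++ sketch of that arithmetic (the FrameTimestamp helper is hypothetical, not part of the ECam API; Symbian itself would express such times as TTimeIntervalMicroSeconds):

```cpp
#include <cstdint>

// Microseconds per second, as used by Symbian's microsecond time types.
const std::int64_t KMicroSecondsPerSecond = 1000000;

// Timestamp (in microseconds since capture started) of the aOffset-th
// frame in a buffer, given the buffer's first-frame timestamp and the
// requested capture rate in frames per second.
std::int64_t FrameTimestamp(std::int64_t aFirstFrameTime,
                            int aOffset,
                            double aFrameRate)
{
    const double interval = KMicroSecondsPerSecond / aFrameRate;
    return aFirstFrameTime + static_cast<std::int64_t>(aOffset * interval);
}
```

At 15 frames per second the frame interval is about 66 666 microseconds, so the frame at offset 2 in a buffer whose first frame is stamped at 1 000 000 microseconds falls at roughly 1 133 333 microseconds.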
It is assumed that the frames within the buffer have been captured sequentially at the requested frame rate. Their individual timestamps can be calculated as a function of their index, capture rate and first frame time latency. In cases where considerable jitter is expected or observed, it may be better to have a single frame in a buffer.

```cpp
void CCameraAppUi::CaptureVideo()
    {
    iCamera->StartVideoCapture();
    }

// MCameraObserver
void CCameraAppUi::FrameBufferReady(MFrameBuffer* aFrameBuffer,
                                    TInt aError);

// MCameraObserver2
void CCameraDemoAppUi::VideoBufferReady(MCameraBuffer& aCameraBuffer,
                                        TInt aError);
```

3.7 Error Handling

The permitted camera calls depend on the state of the camera.
Illegal calls are made if:

• We programmed incorrectly. We may repeat a call unnecessarily, such as by calling Reserve() when the camera is already successfully reserved.
• The camera has been seized by another client and, because we are using MCameraObserver rather than MCameraObserver2, we have not received a notification.

If we make an illegal call using a method that can leave, the function leaves with the error code; for example, calling PrepareImageCaptureL() while video is being captured or PrepareVideoCaptureL() while an image is being captured will leave with KErrInUse.

If we use a method which has a callback, the error code is returned in the callback; for example, if we call CaptureImage() or StartVideoCapture() without a successful previous call to PrepareImageCaptureL() or PrepareVideoCaptureL(), respectively, we get a callback with the error KErrNotReady.

If we call the Reserve() method and a higher priority client is in control of the camera, we get a callback with the error KErrAccessDenied.

If we make illegal calls that cannot leave and have no callback, they are simply ignored; for example, StopVideoCapture() and PowerOff().
This makes these methods safe to use in the destructors of classes, which reduces the risk of the camera being left on accidentally.

More information about the error codes can be found in the method descriptions and the ecamerrors.h file.

3.8 Advanced Topics

3.8.1 Secondary Clients

Multiple clients may share the camera if the secondary clients create their camera objects using the CCamera::NewDuplicateL() factory function.
This can only be called with the handle of an existing camera object, read using CCamera::Handle(), and lets more than one client make CCamera::Reserve() calls at once. This is typically done by MMF video controllers, which are passed the handle of the camera that the camera application is using to display the viewfinder.

The secondary client (using the CCamera::NewDuplicateL() call) assumes the priority of the primary client (the one which generated the camera handle using CCamera::NewL()). Each client calls CCamera::Reserve() and must call CCamera::Release() once it no longer wishes to use camera functionality.
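Putting the calls above together, the sharing sequence might look roughly like this. This is a sketch only: error handling, the observer implementations and the exact NewL()/NewDuplicateL() overloads (which differ between MCameraObserver and MCameraObserver2) are omitted.

```cpp
// Primary client: creates the camera and exposes its handle.
CCamera* primary = CCamera::NewL(observer1, 0 /* camera index */);
TInt handle = primary->Handle();

// Secondary client (an MMF video controller, say) duplicates the
// camera from that handle and assumes the primary's priority.
CCamera* secondary = CCamera::NewDuplicateL(observer2, handle);

// Each client reserves the camera for itself...
primary->Reserve();
secondary->Reserve();

// ...and each must call Release() once it has finished; the camera
// remains under client control until both have released it.
primary->Release();
secondary->Release();
```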
Even if the primary client finishes, a secondary client retains camera control until it calls CCamera::Release(). If a higher priority client gains control, all the clients receive individual MCameraObserver2::HandleEvent() notifications.

3.8.2 Innovative Applications Created for Symbian Smartphones

There have been many innovative uses of the camera on Symbian smartphones. A few examples are as follows:

• The Mosquitoes game from Ojom used the camera as a live backdrop for its game play.
This created an extremely novel gameplay experience.
• RealEyes3d (www.realeyes3d.com) have a product that uses motion tracking with the camera to create an intuitive input device for application navigation.
• Bar-code readers have been created from image processing on camera viewfinder frames. In Japan, 2D bar codes have been used to contain web links to information about products, such as previews for music or videos.
Further information about the use of bar codes can be found in Chapter 6 of Games on Symbian OS by Jo Stichbury et al. (2008) (developer.symbian.com/gamesbook).

Nokia’s Computer Vision Library is freely available from research.nokia.com/research/projects/nokiacv/ and has some open source demonstration applications, which makes it a good starting point for creating innovative applications for S60 smartphones.

4 Multimedia Framework: Video

In this chapter, we look at the Symbian OS video architecture and how you can use it. We begin with details about some general video concepts and look at the various levels of the software architecture.
We then describe the client-side APIs that allow you to perform various video playback and recording operations.

4.1 Video Concepts

Before explaining the Symbian OS video architecture, let’s get familiar with a few important video concepts.

A video is a series of still images which can be displayed one after another in sequence to show a scene which changes over time. Typically, the images have an accompanying audio track which is played at the same time as the images. The term ‘video’ is often taken to mean this combination of images and associated audio.

4.1.1 Delivering Video to a Device

There are two main ways that a video can be delivered to a mobile device:

• The video data could be stored in a file that is downloaded to the device.
When playback is requested, the file is opened and the data is extracted and played.
• The video data could be streamed over a network to the device, which plays it as it is received.

4.1.2 Video Recording

On Symbian smartphones with a built-in camera and microphone, it is possible to record a video. The video and audio data can be recorded to a file for subsequent playback or could be streamed off the device for storage on a server elsewhere.

4.1.3 Video Storage Formats

Video and audio data stored in a file, or received over a streaming link, is often encoded in order to reduce the size of video files or the amount of data that must be streamed. The job of encoding and decoding the data is handled by a device or program called a codec. Encoding and decoding video is a very processor-intensive operation, so the codec may reside on a separate dedicated processor or on a general-purpose Digital Signal Processor (DSP) to ensure timely processing of the video without impacting the performance of the rest of the system.

There are a number of common encoded data formats that you may come across.
For video image data, you may see MPEG2, H.263, MPEG4 and H.264. Each of these formats defines a set of methods that can be used to encode the image data. Generally, the newer formats, such as H.264, provide better levels of compression than older formats, such as MPEG2, especially as they define sets of levels and profiles which allow different encoding methods to be used depending on the type of image data being encoded.

For audio data, you may come across formats such as MP3, AAC and AMR. MP3 is a widely used format for music files.
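The need for encoding is easy to quantify from raw frame sizes. A small plain C++ sketch of the arithmetic (the figures used in the example, QVGA in YUV422 at 15 frames per second, are illustrative values, not numbers from the text):

```cpp
#include <cstdint>

// Raw (uncompressed) video data rate in bytes per second.
// aBytesPerPixel is 2.0 for YUV422, 1.5 for YUV420, 3.0 for RGB24.
std::int64_t RawBytesPerSecond(int aWidth, int aHeight,
                               double aBytesPerPixel,
                               double aFramesPerSecond)
{
    return static_cast<std::int64_t>(
        aWidth * aHeight * aBytesPerPixel * aFramesPerSecond);
}
```

A QVGA (320x240) stream in YUV422 at 15 frames per second already produces 320 * 240 * 2 * 15 = 2 304 000 bytes per second, over 2 MB per second of raw data, which is why codecs such as H.264 are essential for practical storage and streaming.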