The second version allows you to control the horizontal and vertical position of the video relative to the extent. You can set the horizontal and vertical positions to one of the predefined values defined by the THorizontalAlign and TVerticalAlign enumerations. These allow you to specify left, right, top, bottom, and centered alignment. Note that the scaling operation takes place before alignment is performed, so the alignment operations only have an effect when the selected scaling operation does not result in the whole video extent being used by the scaled image.[3]

[3] The pixel aspect ratio describes the ratio between the number of pixels in the horizontal direction and the number of pixels in the vertical direction. Where the horizontal and vertical resolutions are the same, the pixels are square and the pixel aspect ratio is 1:1. If the vertical resolution were to be twice that of the horizontal resolution, it would mean the pixels were twice as wide as they are high and the pixel aspect ratio would be 2:1. When scaling a video to fit a specified area on the screen, the resulting image may appear to be distorted if the pixel aspect ratio is not maintained.

You can also set the alignment parameters to a numeric value which represents a pixel offset from the top left corner of the extent. The pixel offset can be both negative and positive. As before, the scaling operation is performed before the alignment. If you specify numeric alignment parameters that result in part of the scaled image being outside the video extent, the image is clipped so that only the part within the video extent is shown.

CVideoPlayerUtility2 has an additional set of SetAutoScaleL() methods which allow the scaling to be set on a per-window basis:

void SetAutoScaleL(const RWindowBase& aWindow, TAutoScaleType aScaleType);
void SetAutoScaleL(const RWindowBase& aWindow, TAutoScaleType aScaleType,
                   TInt aHorizPos, TInt aVertPos);

It should be noted that the SetScaleFactorL() and SetAutoScaleL() methods cannot be used at the same time.
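The clipping behavior just described is simple rectangle intersection. The sketch below is plain standard C++ with invented names, not part of the Symbian API; it computes which part of a scaled image remains visible when numeric alignment offsets push it partly outside the video extent:

```cpp
#include <algorithm>

// Hypothetical type for illustration; not part of the Symbian API.
struct TVisibleRect { int iX; int iY; int iWidth; int iHeight; };

// Computes the part of a scaled image that remains visible when it is
// placed at a pixel offset (aOffX, aOffY) from the top-left corner of a
// video extent of size aExtentW x aExtentH. Anything outside the extent
// is clipped, as described for numeric alignment parameters.
TVisibleRect VisiblePart(int aExtentW, int aExtentH,
                         int aImageW, int aImageH,
                         int aOffX, int aOffY)
    {
    int left   = std::max(0, aOffX);
    int top    = std::max(0, aOffY);
    int right  = std::min(aExtentW, aOffX + aImageW);
    int bottom = std::min(aExtentH, aOffY + aImageH);
    return TVisibleRect { left, top,
                          std::max(0, right - left),
                          std::max(0, bottom - top) };
    }
```

For example, a 100 × 100 image offset to (−20, 120) inside a 176 × 144 extent leaves only an 80 × 24 strip visible.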
If a SetScaleFactorL() method is called, then any scaling requested by a previous call to SetAutoScaleL() is removed and the new scale factors are applied. Similarly, a call to a SetAutoScaleL() method causes any scale factors set by a previous SetScaleFactorL() call to be removed before the automatic scaling is applied.

4.6.8 Rotation

You can specify that the original video picture should be rotated before it is displayed in the video extent.
This is useful if the video orientation is not the same as the orientation of the display you want to use. You can rotate the picture 90°, 180°, or 270°. In Figure 4.9, you can see an example of 90° rotation.

[Figure 4.9 Rotation]

The orientation of the video within a window can be set with the SetRotationL() method:

void SetRotationL(TVideoRotation aRotation);

If the video is being displayed in multiple windows, then the results of the rotation are seen in all the windows.

The current orientation being used can be retrieved using the RotationL() method:

TVideoRotation RotationL() const;

For CVideoPlayerUtility2, a second SetRotationL() method exists which allows the rotation to be specified on a per-window basis:

void SetRotationL(const RWindowBase& aWindow, TVideoRotation aRotation);

This version of the method can only be called after the video opening operation has completed.

A second RotationL() method also exists that allows the rotation for a specific window to be retrieved:

TVideoRotation RotationL(const RWindowBase& aWindow);

4.6.9 Refreshing the Frame

You can ask for the current video frame to be refreshed on the display using RefreshFrameL().
This is useful if the video is paused and you need to force it to redraw.

void RefreshFrameL();

4.7 Getting Video Information

Frame Rate

The frame rate describes the number of images in the video per unit of time. For instance, a frame rate of 15 frames per second (fps) indicates that the video contains 15 separate images for each second of that video. For videos on mobile phones, frame rates ranging from 8 fps up to 30 fps are typical.
For reference, a modern console video game would run at 30–60 fps and a video telephony call would have a frame rate of 10–15 fps.

To get the frame rate of the currently open video, use the VideoFrameRateL() method:

TReal32 VideoFrameRateL() const;

The frame rate is a floating-point number specifying the number of frames per second.
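As a back-of-the-envelope illustration of what a frame rate implies, the sketch below (plain standard C++ with made-up helper names, not part of the Symbian API) converts a frame rate into a per-frame interval and an expected frame count for a clip of known duration:

```cpp
#include <cstdint>

// Hypothetical helpers for illustration; not part of the Symbian API.

// Interval between successive frames, in microseconds (rounded).
std::int64_t MicrosecondsPerFrame(double aFps)
    {
    return static_cast<std::int64_t>(1000000.0 / aFps + 0.5);
    }

// Number of frames you would expect in a clip of the given duration
// (in microseconds) if no frames are dropped during decoding.
std::int64_t ExpectedFrameCount(double aFps, std::int64_t aDurationUs)
    {
    return static_cast<std::int64_t>(aFps * aDurationUs / 1000000.0 + 0.5);
    }
```

At 15 fps a new image is shown roughly every 66,667 microseconds, and a 10-second clip contains about 150 frames.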
Generally, the controller returns a value that is specified in the metadata of the file or stream being played, which represents the frame rate you would expect if the video was decoded without any frames being dropped for performance reasons.

Frame Size

The frame size describes the size of each image within the video. This is typically indicated by giving the width and height of each image in pixels. So, for instance, a frame size of 640 × 480 indicates that each image has a width of 640 pixels and a height of 480 pixels.

The maximum video frame size that can be handled on a mobile phone depends on the codec and whether it runs on its own processor.
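To see why larger frames stress the codec and the hardware, consider the raw data involved in each decoded frame. The hypothetical helper below (standard C++, not a Symbian API) estimates the uncompressed size of one frame, assuming YUV 4:2:0 storage, which averages 1.5 bytes per pixel:

```cpp
// Hypothetical helper for illustration; not part of the Symbian API.
// Estimates the size in bytes of a single uncompressed frame, assuming
// YUV 4:2:0 storage: one luma byte per pixel plus two quarter-resolution
// chroma planes, i.e. 1.5 bytes per pixel on average.
long long RawYuv420FrameBytes(int aWidth, int aHeight)
    {
    return static_cast<long long>(aWidth) * aHeight * 3 / 2;
    }
```

A single raw VGA frame is about 460 KB under this assumption, so decoding 30 fps of VGA video means producing roughly 13.8 MB of pixel data every second.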
Typical frame sizes for videos on mobile phones range from QCIF (176 × 144) up to VGA (640 × 480). Advances in phone hardware should soon allow high-definition frame sizes, such as 1280 × 720, to be used.

To get the frame size of the currently open video, the VideoFrameSizeL() method can be used:

void VideoFrameSizeL(TSize& aSize) const;

MIME Type

The MIME type of the video image data in the currently open video is available by calling the VideoFormatMimeType() method:

const TDesC8& VideoFormatMimeType() const;

Duration

The duration of the currently open video is available by calling the DurationL() method:

TTimeIntervalMicroSeconds DurationL() const;

Bit Rate

The bit rate describes how much information is contained within a video stream per unit of time.
It is generally given as a number of bits per second (bps), kilobits per second (kbps), or megabits per second (Mbps).

The bit rate is important when streaming video over a network because the network medium can only transmit data at a certain maximum bit rate. Current mobile phones use various network media that run at various data rates, from the lower-speed GSM and CDMA networks through to the high-speed HSDPA and WLAN networks.

The frame size and frame rate of the video have a direct effect on the bit rate, as does the amount of compression applied by the encoding codec. The encoding codec uses a variety of techniques to compress a video in order to meet the bit rate requirements of the network medium over which it is being streamed.
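The relationship between bit rate, duration, and data volume is simple arithmetic. The hypothetical helper below (standard C++, not part of the Symbian API) estimates how many bytes a stream or file consumes at a given average bit rate:

```cpp
// Hypothetical helper for illustration; not part of the Symbian API.
// Approximate number of bytes produced by encoding (or consumed by
// streaming) at a given average bit rate for a given number of seconds.
long long EncodedSizeBytes(int aKbps, int aSeconds)
    {
    // 1 kbps = 1000 bits per second; 8 bits per byte.
    return static_cast<long long>(aKbps) * 1000 / 8 * aSeconds;
    }
```

For example, one minute of video at 384 kbps comes to about 2.9 MB, whereas the same minute at 64 kbps is under 0.5 MB.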
Lower bit rates mean more of the original video information is lost during the encoding and the video is of lower quality when it is played.

The bit rate is also used when encoding a video into a file in order to control the size of that file. So a lower bit rate means a smaller file size at the expense of lower video quality on playback.

Table 4.2 shows typical bit rates for various data bearers along with an indication of the video frame sizes and rates that you might expect on a phone that supports that medium.

Table 4.2 Data Bearer Video Capacities

  Medium   Bit rate (kbps)   Frame size and rate
  -------  ---------------   ------------------------------
  GPRS     48-64             QCIF (176 × 144), 15 fps
  3G       128-384           QCIF (176 × 144), 30 fps or
                             QVGA (320 × 240), 15 fps
  WiFi     500-1000          QVGA (320 × 240), 30 fps

To get the bit rate for the currently open video use the VideoBitRateL() method:

TInt VideoBitRateL() const;

This returns the average bit rate in bits per second.

4.8 Accessing Metadata

A video may contain extra items of information such as the video title or author.
This information is held in a form called metadata. The metadata format depends on the type of video being played. If the controller supports it, it is possible to read these items of metadata from the video file.

You can read the number of metadata entries using the NumberOfMetaDataEntriesL() method:

TInt NumberOfMetaDataEntriesL() const;

This tells you the maximum number of metadata items you can read from the file. If the controller does not support metadata, then either the return value will be zero or the method will leave with KErrNotSupported.

Each item of metadata can be read using the MetaDataEntryL() method and specifying the index of the entry that you want:

CMMFMetaDataEntry* MetaDataEntryL(TInt aIndex) const;

The returned CMMFMetaDataEntry item allows you to access the metadata item's name and value:

const TDesC& Name() const;
const TDesC& Value() const;

Once you have finished with the CMMFMetaDataEntry item, you need to delete it.

4.9 Controlling the Audio Output

4.9.1 Audio Resource Control

In the CVideoPlayerUtility::NewL() method we saw there were two parameters: aPriority and aPref.
These are used to control access to an audio resource such as a loudspeaker.

The aPriority parameter is used in the situation where two clients require access to the same sound device, such as the loudspeaker. While a video is being played, its audio track uses the speaker. If, during the video playback, the calendar client wishes to play an audible tone to indicate an upcoming appointment, the priority field is used to determine whether that tone can be played.
Normally the preferred client is the one that sets the highest priority; however, you should note that a client that has the MultimediaDD capability takes preference over a client that does not, even if that second client sets a higher priority. This behavior is explained in more depth in Chapter 5.

The aPref parameter is used in the situation where the audio track in a video is using a sound device and some other client with a higher priority wants to use it. In this situation, the audio track in the video would either have to stop using the sound device or might be mixed with the other client's audio.
The aPref parameter is used to determine whether that is acceptable, or whether the whole video playback should stop and return an error code.

Methods exist to change these parameters after the call to NewL() and to read the values back:

void SetPriorityL(TInt aPriority, TMdaPriorityPreference aPref);
void PriorityL(TInt& aPriority, TMdaPriorityPreference& aPref) const;

If you try to play a video but the audio resource is already being used by a higher-priority client, or you are playing a video and the audio resource is lost to a higher-priority client, then your observer method MvpuoPlayComplete() is called with the KErrInUse error code. At this point, you could indicate to the user in some way that the playback has been halted and wait for the user to restart the playback. This is normally acceptable because the audio resource will have been lost for an obvious reason, such as an incoming phone call, and, once the phone call has completed, the user can easily tell the video application to resume playback.

From Symbian OS v9.2 onwards, if you decide that you would like to automatically restart the video playing once the audio resource becomes available, then you can register for an audio notification:

TInt RegisterAudioResourceNotification(
    MMMFAudioResourceNotificationCallback& aCallback,
    TUid aNotificationEventUid,
    const TDesC8& aNotificationRegistrationData);

You need to pass a callback class which will receive the callback. To do this you need to derive your own class from the MMMFAudioResourceNotificationCallback class.
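The arbitration rule described in Section 4.9.1 can be modelled with a small sketch. The types and function below are invented for illustration; the real decision is made inside the Symbian audio subsystem:

```cpp
// Toy model of the audio resource arbitration rule: a client holding the
// MultimediaDD capability outranks any client without it, regardless of
// the numeric priority; otherwise the higher priority value wins.
// (Tie-breaking here arbitrarily favours the first client; the real
// behaviour is defined by the audio subsystem, not by this sketch.)
struct TClientModel
    {
    bool iHasMultimediaDD;
    int  iPriority;
    };

bool FirstClientWins(const TClientModel& aFirst, const TClientModel& aSecond)
    {
    if (aFirst.iHasMultimediaDD != aSecond.iHasMultimediaDD)
        {
        return aFirst.iHasMultimediaDD;
        }
    return aFirst.iPriority >= aSecond.iPriority;
    }
```

In this model a MultimediaDD client with priority 0 still wins against a non-capability client that requested priority 100, matching the behavior described above.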