Much of the work in this area is associated with academic research initiatives, originally closely linked to ubiquitous and nomadic computing. Perhaps because of these links, they have only recently started to use mobile phones; examples include the SMS game Day of the Figurines,4 which could also be regarded as an alternate reality game, and Insectopia,5 which is based around Bluetooth proximity – a mechanism we shall discuss later in this chapter.

Whilst mixed reality games are an exciting prospect for creating an engaging experience, we must also consider how the players will interact with objects and places, both real and virtual, using their mobile phones. In particular, we must think beyond the standard phone keypad and four-way input controller and consider non-traditional input mechanisms, such as a camera or 3D motion sensors, and how best to incorporate user context through location and/or proximity. This is the subject of the rest of this chapter.

4 Flintham M., Smith K., Benford S., Capra M., Green J., Greenhalgh C., Wright M., Adams M., Tandavanitj N., Row Farr J., and Lindt I., Day of the Figurines: A Slow Narrative Driven Game for Mobile Phones Using Text Messaging, Proceedings of the 5th International Workshop on Pervasive Games, Salzburg, Austria, June 11–12, 2007.
5 Peitz J., Saarenpää H., and Björk S., Insectopia: Exploring Pervasive Games Through Technology Already Pervasively Available, Proceedings of the International Conference on Advances in Computer Entertainment Technology, Salzburg, Austria, June 13–15, 2007.
6.2 Camera

Cameras are now a common feature of even the most basic mobile phones and provide developers with an interesting opportunity for their use within games. However, as yet, there have been relatively few examples of games that have done so, and they have used the camera in very different ways. Some of the earliest of these games, unsurprisingly, came out of Japan around 2003, such as Photo Battler from NEC.
This game allowed players to turn photos into character cards that were assigned various attributes, enabling them to compete against each other. At around the same time, Shakariki Petto appeared from Panasonic, which took the form of a virtual pet that a player fed by taking pictures of colors that represented food; for instance, the color red represented apples. More recent games have also explored using the pictures themselves to create mixed reality games. The Manhattan Story Mash-Up6 had players on the streets of Manhattan who were given words defined by other players, who were online, and had to take a picture to represent each word. Other players then voted on the most applicable picture, and the player who took that picture was awarded points. A similar concept was explored in My Photos are My Bullets,7 although in this case the object of the game was to take a picture of a prescribed opponent and then let an independent 'judge' decide the number of points awarded, based on the quality of the image.

Other games have evolved to use the camera to detect movements of the phone and transfer them to movements within the game.
Probably the best known are from game developer Ojom (www.ojom.com), with its games Attack of the Killer Virus and Mosquitos for Nokia's S60 smartphones. In both games, the enemy characters that the player must 'shoot' are superimposed on top of a live video stream from the mobile phone's camera. The player moves around this mixed reality space by moving his phone, and fires using the centre key of the joypad.
Although this technique sounds complex, it is a fairly straightforward piece of signal processing: the captured images are generally grid sampled, to reduce the complexity, and then some form of block-matching algorithm is applied to successive images to estimate the direction of motion. However, it should be noted that the granularity of control is coarse, the technique is affected by camera quality and lighting levels, and it is extremely power hungry.
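To make the idea concrete, here is a minimal sketch of grid-sampled block matching in plain C++ (illustrative only, not code from any of the games mentioned). It assumes 8-bit grayscale frames stored row-major; the block size, grid stride, and search range are arbitrary choices.

#include <climits>  // LONG_MAX
#include <cstdint>  // uint8_t
#include <cstdlib>  // std::abs
#include <vector>

struct TMotion { int iDx; int iDy; };

// Sum of absolute differences (SAD) between a block at (aX, aY) in the
// previous frame and the block displaced by (aDx, aDy) in the current frame.
static long BlockSad(const std::vector<uint8_t>& aPrev,
                     const std::vector<uint8_t>& aCurr,
                     int aWidth, int aX, int aY,
                     int aDx, int aDy, int aBlock)
    {
    long sad = 0;
    for (int y = 0; y < aBlock; ++y)
        {
        for (int x = 0; x < aBlock; ++x)
            {
            const int p = aPrev[(aY + y) * aWidth + (aX + x)];
            const int c = aCurr[(aY + aDy + y) * aWidth + (aX + aDx + x)];
            sad += std::abs(p - c);
            }
        }
    return sad;
    }

// Estimate the dominant image motion between two grayscale frames.
// Grid sampling: only one 8x8 block per 32x32 cell is examined, and each
// candidate displacement is scored over all sampled blocks.
TMotion EstimateMotion(const std::vector<uint8_t>& aPrev,
                       const std::vector<uint8_t>& aCurr,
                       int aWidth, int aHeight)
    {
    const int KBlock = 8;   // block size in pixels
    const int KStride = 32; // grid sampling step
    const int KRange = 4;   // search +/-4 pixels per axis
    TMotion best = { 0, 0 };
    long bestSad = LONG_MAX;
    for (int dy = -KRange; dy <= KRange; ++dy)
        {
        for (int dx = -KRange; dx <= KRange; ++dx)
            {
            long total = 0;
            for (int y = KRange; y + KBlock + KRange <= aHeight; y += KStride)
                {
                for (int x = KRange; x + KBlock + KRange <= aWidth; x += KStride)
                    {
                    total += BlockSad(aPrev, aCurr, aWidth, x, y, dx, dy, KBlock);
                    }
                }
            if (total < bestSad)
                {
                bestSad = total;
                best.iDx = dx;
                best.iDy = dy;
                }
            }
        }
    // If the phone pans right, the image content appears to move left, so a
    // game would typically negate the result when steering its crosshair.
    return best;
    }

Scoring each candidate displacement over a sparse grid of blocks, rather than over every pixel, is precisely the complexity reduction that grid sampling buys.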
Some games have used visual codes to detect movement, such as an augmented reality version of table tennis,8 which is based on an implementation of the Augmented Reality Toolkit for Symbian OS. The system interacts with code markers fixed on a physical object to calculate the rotational vectors it applies to a pre-rendered image. Figure 6.3 shows an example of such a system, used for adverts rather than games, which improves upon previous versions in that the 3D object is stored on the specially designed tag and rendered on the fly. However, all these systems suffer from similar problems to those experienced by optical flow techniques.

Figure 6.3 Using cameras to capture motion

Other games have used various forms of two-dimensional barcode; the most famous example being ConQwest by Area Code (www.playareacode.com), which used Semacodes. ConQwest was a team-based treasure hunt game using implied positioning from Semacode stickers, and was sponsored by Qwest Wireless in the USA to promote its camera phones. Each sticker was given a relative value, and the players collected the stickers by taking pictures with their camera phones.

6 Tuulos V., Scheible J., and Nyholm H., Combining Web, Mobile Phones and Public Displays in Large-Scale: Manhattan Story Mashup, Fifth International Conference on Pervasive Computing, Toronto, Ontario, Canada, 13–16 May 2007.
7 Suomela R. and Koivisto A., My Photos are My Bullets – Using Camera as the Primary Means of Player-to-Player Interaction in a Mobile Multiplayer Game, International Conference on Entertainment Computing 2006, 20–22 September 2006, Cambridge, UK, pp. 250–261.
8 Henrysson A., Billinghurst M., and Ollila M., Face to Face Collaborative AR on Mobile Phones, Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR 2005), October 5–8, 2005, Vienna, Austria.
The first team to collect $5000 worth of Semacodes won. The game was played by teams, generally drawn from local high schools in various parts of the USA. More information about the use of barcodes in mobile games can be found in section 6.3.6.

6.2.1 Using the Camera on Symbian OS

When writing native C++ games for Symbian OS platforms, use of the camera requires the UserEnvironment platform security capability. The CCamera API is used, and one of the two available callback interfaces, MCameraObserver or MCameraObserver2, should be implemented.
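In practice, this means declaring the capability (and linking against the ECam library) in the project's build file. A minimal fragment of a hypothetical ColourTracker.mmp might contain the following lines; LocalServices is also listed here, matching the Camera API overview given later in this section:

// ColourTracker.mmp (fragment; the project name is hypothetical)
LIBRARY       ecam.lib
CAPABILITY    LocalServices UserEnvironment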
The primary difference between the interfaces is in the way the camera is managed when several applications use it simultaneously. MCameraObserver2 introduces a priority level, registered for each application, which determines ownership of the camera hardware at any instant in time. However, we will use MCameraObserver in the example given below.

Symbian OS provides an abstract onboard camera API, called ECam, which is implemented by handset manufacturers according to the capabilities of each phone. ECam provides a hardware-agnostic interface for applications to communicate with, and control, any onboard camera hardware, via the implementation of the CCamera API provided by the manufacturer. Recent releases provide enhanced control of different camera settings, provided they are supported by the camera hardware, such as camera stabilization functions, an ISO rate controller, and various focus and aperture setting options.
These are implemented in the class CCameraAdvancedSettings, which is defined inside the class CCamera. As we shall only deal with the basic components of the camera API of Symbian OS v9.1 in this chapter, we refer readers to the relevant platform SDKs for a detailed discussion of the advanced features.

Below, we provide an example using MCameraObserver, called ColourTracker. It enables a Nokia S60 3rd Edition smartphone to identify a specific color in a scene and then track it, as shown in Figure 6.4. While this example merely provides the tracking functionality, one could easily imagine it being used for identifying targets in a mixed reality shooting game, where players use their camera phones to detect targets by their colors.
Figure 6.4 Screen shots of ColourTracker. The left image shows a red London Routemaster bus before processing. The right image is after processing, where the red color has been identified, tracked and replaced with white

Camera API Overview

Library to link against:                   ecam.lib
Header to include:                         ecam.h
Required platform security capabilities:   LocalServices, UserEnvironment
Classes to implement:                      MCameraObserver
Classes used:                              CCamera

CCamera provides several asynchronous functions that enable applications to control the cameras on mobile phones. These functions are asynchronous to prevent the application thread from being blocked while waiting for an operation to finish, such as capturing a camera frame.

The initial step in using the camera on Symbian OS is to construct the camera object, which is done by calling the static factory function CCamera::NewL(). This call may leave with one of the following error codes:

• KErrNotSupported, if no camera is installed on the phone or if an incorrect camera index is passed as a parameter (for example, passing 1, which indicates the front camera, when the phone has only a back camera, which is index 0)
• KErrPermissionDenied, if the UserEnvironment capability is not listed in the application's MMP file
• KErrNoMemory, if insufficient memory is available for allocating the CCamera object.

Not all Symbian OS phones come with a camera built in, so it is essential to handle the case where camera-based applications, such as ColourTracker, are installed on phones without one.
For this reason, it is good practice always to check the availability of a camera by calling CCamera::CamerasAvailable() before running any camera-dependent code. The function returns the number of cameras installed on the phone, and returns zero if no camera is present.

Assuming that there is at least one camera on the smartphone, the next step is to reserve the camera, which grants the client application a handle to it. To perform this task, the asynchronous CCamera::Reserve() function must be called on the CCamera instance.
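A minimal sketch of these first two steps is shown below. CColourTrackerEngine is a hypothetical engine class of ours (not from the SDK) that implements MCameraObserver, and iCamera is its CCamera* member:

void CColourTrackerEngine::ConstructL()
    {
    // Bail out early on phones without any camera hardware.
    if (CCamera::CamerasAvailable() == 0)
        {
        User::Leave(KErrNotSupported);
        }
    // Create a camera object for the main (back) camera, index 0;
    // 'this' is the MCameraObserver that will receive the callbacks.
    iCamera = CCamera::NewL(*this, 0);
    // Request exclusive access to the camera. Completion is reported
    // asynchronously in MCameraObserver::ReserveComplete().
    iCamera->Reserve();
    }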
Once the camera is reserved, it has to be switched on by calling another asynchronous function, CCamera::PowerOn(). The camera will then be ready to deliver data from the camera viewfinder in bitmap buffers via either of the activation functions, CCamera::StartViewFinderBitmapsL() or CCamera::StartViewFinderDirectL().

Of the two functions, CCamera::StartViewFinderDirectL() is the faster, as it transfers the viewfinder buffer from the camera to the screen directly using direct screen access (DSA), while CCamera::StartViewFinderBitmapsL() does the drawing through the window server (WSERV).
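Putting the sequence together, the sketch below (continuing our hypothetical CColourTrackerEngine, and choosing CCamera::StartViewFinderBitmapsL() for simplicity) shows how the MCameraObserver callbacks chain the asynchronous steps. A full implementation must also provide the remaining MCameraObserver callbacks, ImageReady() and FrameBufferReady(), even if they are left empty:

// Reserve() has completed; if we now own the camera, power it up.
void CColourTrackerEngine::ReserveComplete(TInt aError)
    {
    if (aError == KErrNone)
        {
        iCamera->PowerOn(); // answered by PowerOnComplete()
        }
    }

// PowerOn() has completed; start streaming viewfinder bitmaps.
void CColourTrackerEngine::PowerOnComplete(TInt aError)
    {
    if (aError == KErrNone)
        {
        TSize size(176, 144); // requested size; the implementation may adjust it
        TRAPD(error, iCamera->StartViewFinderBitmapsL(size));
        if (error != KErrNone)
            {
            // Handle the failure, e.g., notify the game's UI.
            }
        }
    }

// Called for every viewfinder frame; this is where ColourTracker
// would scan the bitmap for the color being tracked.
void CColourTrackerEngine::ViewFinderFrameReady(CFbsBitmap& aFrame)
    {
    // ... color detection and tracking on aFrame ...
    }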