Android Builders Summit 2013 - Camera 2.0: The New Camera Hardware Interface in Android 4.2

Captions
Good afternoon, and welcome to Camera 2.0. Before we get started, I just wanted to get a feel for the audience: how many people here work with the camera APIs? How many people have tried to bring up a new image sensor on hardware? Okay. And what are you looking for — any particular interest? Anybody who has never dealt with cameras at all, anybody totally new? Excellent. So we'll try to give you some background and then do a deep dive into Camera 2.0, which is the new hardware interface layer.

Before we start, welcome once again. I'm Balwinder Kaur and I work at Aptina; I've been there about a year and a half. I've been doing Android since 2009, and I was at T-Mobile before that, so I have a lot of background in mobile software and apps, but not so much in camera until I came to Aptina. I used to think, oh, it's this tiny little thing and it takes a picture, but I'm quite amazed at everything that goes into it and how complex it is. Aptina builds image sensors; we have our image coprocessor, and we also have our own Android camera stack, and some of the things we'll be talking about are experiences from bringing up our own sensors. And I have Ashutosh, who has flown in from India just to be here with us today.

[Ashutosh] Hello, good afternoon everybody, and welcome to this talk. I'm Ashutosh; I've been working with Aptina and Android for the last three years, and prior to that I was at Samsung Electronics, where I worked on a range of multimedia devices. So yes, we'll be talking about Camera 2.0. Thanks.

[Balwinder] Okay, so our agenda today. We'll talk generally about the camera use cases. Since this is an embedded Android summit, we're not going to focus too much on the API and SDK side, but we'll go briefly over what the different classes are and what's available to an application developer. We'll focus a little on the camera service: with Jelly Bean 4.2 the camera service got re-architected in a big way. There are still no implications at the application layer or in the APIs that are available, but we will talk about what the new architecture is. After that I'll hand it over to Ashutosh, and he's going to cover everything that's below the HAL: what the new interface is, the architecture of the camera 2 HAL and what it looks like, and the device drivers. Then we'll talk about some of the challenges we typically run into whenever we bring up a new image sensor on a new platform. Lastly, we'll talk about some of the emerging trends in this industry and where we think the future is going. Hopefully we'll have time left for Q&A at the end.

So what are the prominent use cases? Very simply, once you have a camera in a device you need to be able to do a live preview — a viewfinder, basically. That path typically goes straight from the image sensor to the display subsystem; it doesn't go back up to the application developer, but sometimes developers want to do things with it, so you also need to be able to provide a copy of the preview frame back up to the application. Then we have the ability to capture a frame, which is the most common use case for an image sensor. I'll also talk a little about the emerging field of embedded computer vision, where image sensors are not necessarily used for getting images but as a means of getting information, especially for context-aware phones. Finally, we have video recording of a camera stream.
Then there are the secondary use cases. Basically, think of a point-and-shoot camera with a bit more control: what are the different scenes available, can I set the scene, is it a sports scene, is it a night scene, can I put any filters on it? The new feature that came out with Ice Cream Sandwich was the ability to take a snapshot during video recording. Then we have the different event callbacks — okay, the shutter was clicked, so if the application developer wants to do something special at that moment they can; focus was achieved — these are the callbacks that the camera subsystem provides to the application developer. And then finally there are the information-related use cases. As of today there are very few classes and very little metadata provided to the application developer. The most common is the Face class, which will tell you how many faces were detected in the scene, the confidence level that it is a face, the location of the eyes, and the location of the smile — metadata about an image, but like I said, it's very limited. With 4.2 there's a lot of plumbing that has been done, and a lot of emphasis on being able to provide metadata right up to the application layer.

There are limitations with this camera API. If you have any of the Android phones — HTC, Samsung — you'll see that their camera applications can do a lot of things for which there are no APIs. For example, burst-mode photography, which is very common: you could have a continuous burst, where you press the shutter button and it keeps taking pictures until you release it, or a fixed burst, where you press the shutter once and it takes three shots in succession. There's no support for panoramic shots. There's no frame metadata available: I can know that a frame was taken, but what was the exposure for it, what was the focus — none of that information is available right now. And there's no per-frame control of the camera.

To back this up a little, behind this whole camera 2.0 idea there's a lot of research that was done at Stanford — you can search for F-Cam or Camera 2.0 — where they describe a lot of the things they would like to do with the camera. For example, HDR, which is high dynamic range: I'm sure you've taken pictures where something is in the shade and unclear while there's bright sunlight elsewhere — say, something under a tree that's not clear. For high dynamic range they take multiple shots at different exposures and then stitch them together. The same thing with flash and no flash: you could have a scene where a person is sitting in the dark and there are lights outside; you want the flash on the person, but you also want those lights in the back, so you need the ability to combine the flash and no-flash shots. There's a whole range of applications that can only be achieved with per-frame control: for this frame I want the exposure to be this, I want the gain to be this.

Then there are other things. Whatever you get back from the camera subsystem comes through three callbacks: there's what is called the postview callback, there's a JPEG callback, and a raw callback. I still haven't seen any device where you can actually get the raw callback — I could be mistaken, but I haven't seen one — so it's not very common; JPEG is the one that is common. But you can do a lot of things if you get the raw information from an image sensor. So all of these things are still missing.
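To make "per-frame control" concrete, here is a purely hypothetical sketch — not the F-Cam API or any real Android interface, just an illustration of the idea that each outgoing frame request carries its own exposure and gain, which is exactly what an HDR bracket needs:

```c
/*
 * Hypothetical illustration (not any real Android or F-Cam API) of what
 * "per-frame control" means: each frame request carries its own settings.
 */
#include <stdint.h>
#include <stdio.h>

struct frame_request {          /* made-up structure, for illustration only */
    int64_t exposure_ns;        /* exposure time for this one frame */
    float   analog_gain;        /* sensor gain for this one frame */
    int     flash_on;           /* fire the flash for this frame? */
};

int main(void)
{
    /* A three-shot HDR bracket: under-, normally- and over-exposed frames
     * that an app would later merge into one high-dynamic-range image. */
    struct frame_request bracket[3] = {
        { .exposure_ns =  2000000, .analog_gain = 1.0f, .flash_on = 0 },
        { .exposure_ns = 16000000, .analog_gain = 1.0f, .flash_on = 0 },
        { .exposure_ns = 66000000, .analog_gain = 2.0f, .flash_on = 0 },
    };

    for (int i = 0; i < 3; i++)
        printf("frame %d: exposure %lld ns, gain %.1f\n",
               i, (long long)bracket[i].exposure_ns, bracket[i].analog_gain);
    return 0;
}
```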
Now, just to give a little overview of the APIs we have currently: there are six classes, and of these the most interesting ones are Camera and Camera.Parameters. There are eight callback interfaces; the one at the bottom is in yellow because it was the latest one, introduced with 4.2, and the rest are all from 4.1 and earlier. Camera gives you the capability to open a camera, close the camera, get access to the camera controls, set up a preview, and of course take a picture. The Camera.Parameters class is a huge class with a lot of different API methods.

To help understand it, there are three categories that most of the APIs fall into. One is the mandatory feature set, with methods typically named like getSupportedPreviewSizes or getSupportedPreviewFormats — they have "supported" in the name. That means every camera on an Android system needs to provide certain mandatory features, and there are gets and sets for those. Then there is the optional feature set, things like "is video stabilization available." Now, just because the APIs are there doesn't necessarily mean the whole stack is functional: you could have hardware that doesn't support the feature, or hardware that supports it but where the correct plumbing hasn't been done in the camera HAL, so it isn't available to the end user. These are runtime calls; typically an application developer would ask at runtime, "is this feature supported," and if it is, go and enable whatever they want from the UI perspective. Finally, there's also a dumb pipe available, and we use it a lot internally to provide our own features — for example, I mentioned that burst is not available, but we have our own burst extension, and there are other things that aren't available which we provide this way. Basically it's just passing string parameters; you can query the system for what strings are available, but it's very OEM dependent. Somebody really has to know the implementation of the camera HAL — it will not be a generic solution.
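As an illustration of that "dumb pipe": the parameters travel as a flattened string of key=value pairs separated by semicolons, and a vendor HAL can define extra keys that only it understands. The sketch below uses a made-up vendor key, aptina-burst-count, and a toy parser; it is not the actual CameraParameters implementation.

```c
/*
 * Toy sketch of the flattened "key1=value1;key2=value2" parameter string
 * that the setParameters()/getParameters() dumb pipe carries. The
 * "aptina-burst-count" key is hypothetical; only a HAL that knows it
 * would act on it.
 */
#include <stdio.h>
#include <string.h>

/* Look up one key in a flattened parameter string (toy parser, no error paths). */
static int get_param(const char *flat, const char *key, char *out, size_t out_len)
{
    const char *p = strstr(flat, key);
    if (!p || p[strlen(key)] != '=')
        return -1;
    p += strlen(key) + 1;
    size_t n = strcspn(p, ";");          /* value ends at ';' or end of string */
    if (n >= out_len)
        n = out_len - 1;
    memcpy(out, p, n);
    out[n] = '\0';
    return 0;
}

int main(void)
{
    const char *params = "preview-size=1280x720;picture-format=jpeg;"
                         "aptina-burst-count=3";   /* hypothetical vendor key */
    char value[16];
    if (get_param(params, "aptina-burst-count", value, sizeof(value)) == 0)
        printf("burst count requested: %s\n", value);
    return 0;
}
```

An application would push such a string down with setParameters() and read it back with getParameters(); a key the HAL does not recognize typically has no effect.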
And then there are the rest of the camera classes, which I already mentioned briefly earlier.

Now, moving on to the Camera 2 internals. One of the first big disclaimers as we move into this section is that the only documentation we have is from reading the Android Open Source Project code and picking things up from there — trying to figure out what the design is from the method names and the comments. There's no document that defines all of this, so that's one thing to be cautious of. Second, right now there are no APIs for this at the SDK level, so the next version of Android could very well change some of it — again, something to be cautious about in case you go back and refer to this later. When 4.2 got released and I first started looking at it, I saw all these references to camera 2 — there was a camera2.h, there were a lot of classes — and I started wondering which application was using it. It turns out no application is using these features as of now, not even the closed-source Photo Sphere app that Google released with 4.2. The HAL implementation that came with it in open source was the Samsung Exynos 5 based one, and we'll talk about it later in the talk. This reference to android.hardware.ProCamera actually came from one of the comments in the code, which kind of indicated what they're trying to do with camera 2: provide very fine-grained control.

Very briefly — I'm sure everybody in this room has seen this layer-cake diagram — I'll be focusing on the camera HAL and where it sits. It's that little piece in the hardware abstraction layer; it comes with its definition, and we and all the other OEMs provide our own implementation based on the application processor and the camera subsystem. The top part, as you can see, is hardware independent; the bottom is very closely tied to the hardware, which in this case mostly means the application processor, the image coprocessor, and the camera hardware or the image sensor itself.

Now the process view. The camera service resides within the media server. Whenever a new application wants a camera, it makes a request to access it over the Binder interface, and if it has all the permissions, the camera service will grant it access. It does this by creating a camera hardware object, which then makes system calls into the kernel to actually communicate with the hardware. There's also always a communication path from the image sensor to SurfaceFlinger, the display system, for preview purposes. If there is a second application and it wants access to another camera object, that is typically granted; however, at a given time one application can have access to only one camera, and a given camera can be accessed by only one application. So from the application development perspective, to be a good citizen, whenever you get into the paused state you should release all your handles to the camera.

Inside the camera application, the interesting thing here is what's below the JNI. One thing that hasn't changed between camera 1 and camera 2 is that for any frame information there is always a copy made from the native side to the application space — I think it's almost a security thing. But consider one of the use cases, burst: let's say we have an 8-megapixel sensor and we want to take two seconds of burst. This becomes a problem, because even if you compress each 8-megapixel picture down to, say, a 2 MB JPEG, you have to make copies if you want to return it to the application, and if it's running at 30 frames a second that's a huge amount of memory copying. So this again highlights one of the limitations that are there; I still haven't seen, within the re-architecture, a solution to this problem, but maybe with the next version of Android something will come up. The JNI layer also holds references to the different objects, and the callbacks are made through it.

Now, moving on to the camera service. Everything in the middle here are the IBinder interfaces. The camera service is libcameraservice, and camera.h is the hardware interface that whoever provides the camera HAL will implement. This changes with 4.2: libcameraservice can now talk to two different interfaces, camera.h and camera2.h, and a given device can have an implementation of either camera.h or camera2.h. There are some glue classes we discovered, Camera2Client and Camera2Device, which basically enable libcameraservice to talk to both camera.h and camera2.h.
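How does libcameraservice know which of the two interfaces a given camera implements? As far as I can tell from the 4.2-era libhardware headers (camera_common.h), the HAL module reports a device version per camera, and the service picks the camera.h or camera2.h path from that. A rough sketch — field and constant names are as I recall them from those headers, so verify against your tree:

```c
/*
 * Sketch of how a HAL module could advertise which camera device API it
 * implements, based on my reading of AOSP's hardware/libhardware headers
 * (camera_common.h) around 4.2 -- treat exact names as approximate.
 */
#include <hardware/camera_common.h>   /* camera_info, device version constants */

static int my_get_camera_info(int id, struct camera_info *info)
{
    info->facing = CAMERA_FACING_BACK;
    info->orientation = 90;
    /* Advertise the camera2.h interface; a legacy HAL would report
     * CAMERA_DEVICE_API_VERSION_1_0 here instead, and libcameraservice
     * would then pick the camera.h code path for this device. */
    info->device_version = CAMERA_DEVICE_API_VERSION_2_0;
    info->static_camera_characteristics = NULL;  /* filled in by a real HAL */
    return 0;
}
```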
Now let's try to understand a little more about the difference between the two. This slide I think I've already covered — it's basically what the camera service does. One of the basic differences between camera 1 and camera 2 is that in camera 1 everything was function based: the whole view of the image sensor and the camera subsystem was take a picture, get the preview, set a mode, things like that. That has changed to an entirely stream-based perspective. There are different streams available at any given time: there's the preview stream, a capture stream, a callback stream, ZSL — which stands for zero shutter lag — and a recording stream. It's all stream based, so there's a steady stream of information available from the sensor, and it becomes more dynamic: you can insert different things, access different things, reprocess them. The Camera2Client sits on top of the HAL and has different processors that run; they talk to the different streams, and Ashutosh will go over this in more detail when we start talking about the camera HAL implementation.

The other thing is that there's now a big focus on metadata, of two kinds. Static metadata describes the capabilities of the camera: you don't have to open an instance of the camera to get that information; the system can be queried statically. The other point is that there is now an attempt to minimize copies, because of all the sensors available on a device — compared to the accelerometer or the gyroscope — the most memory-intensive sensor is the camera. So there's an attempt to minimize the number of memory copies within the native subsystem; the copy that happens from native to the application, for the Dalvik virtual machine, still exists, but within the native side they're trying to minimize copies.

On metadata, again, there are two kinds: static and frame based. If you open up the file — I have all the links at the end, a couple of slides from now, with the path where you can find it — it has a whole list of tags. The ones marked with _INFO are the static ones; you'll typically find pairs like android.flash and android.flash.info. The others are frame based: for this frame, when it was taken, what was the flash setting — and that comes bundled back with the frame information. And for all the OEMs in the room, there is a provision to provide vendor-specific tags, so if you want to do something to differentiate your stack, this is available. Earlier, the camera parameters pipe was the way to push settings down to control the image sensor; the vendor-specific tags are another provision — again a dumb one — to provide metadata back up to the application layer. So it has become a two-way street instead of one way. These are all the directories; the only thing I have highlighted in yellow is where the paths changed from Ice Cream Sandwich to Jelly Bean, and the metadata file I was mentioning is the last line on this slide.
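For reference, the metadata itself is handled by a small C library under system/media/camera. Below is a rough sketch of packing a couple of per-frame tags with that API; the function and tag names are as I recall them from camera_metadata.h and camera_metadata_tags.h, so treat them as approximate:

```c
/*
 * Rough sketch of packing per-frame metadata with the camera_metadata
 * C helpers from system/media/camera. Verify names against your tree.
 */
#include <system/camera_metadata.h>

static camera_metadata_t *build_frame_metadata(int64_t exposure_ns, uint8_t flash_mode)
{
    /* room for two entries and a little payload data */
    camera_metadata_t *m = allocate_camera_metadata(2, 64);
    if (!m)
        return NULL;

    /* Per-frame tags: what the settings actually were for this frame. */
    add_camera_metadata_entry(m, ANDROID_SENSOR_EXPOSURE_TIME, &exposure_ns, 1);
    add_camera_metadata_entry(m, ANDROID_FLASH_MODE, &flash_mode, 1);

    /* Static tags (e.g. ANDROID_FLASH_INFO_AVAILABLE) would instead live in
     * the static characteristics blob that can be queried without ever
     * opening the camera device. */
    return m;
}
```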
So, are there any questions at this point? Sure — I will be emailing them right after the talk. Yes, and feel free to get my contact information so we can communicate if there's any other information you would like in the future. With that, I'd like to hand it over to Ashutosh. Thanks.

[Ashutosh] Hello, hi, good afternoon everybody. I will be talking about the camera HAL in particular and all the layers underneath it: basically the camera drivers, the different configurations for the drivers, and then some of the challenges you will face while bringing up your HAL. This is the typical camera stack in the Android framework — you must have seen it so many times. This block is what is known as the camera HAL, the hardware abstraction layer. Every vendor has their own implementation of this abstraction layer, which basically depends on the underlying hardware: the kind of SoC they are using and the kind of camera hardware they are using. We will first talk about the changes in going from ICS to JB, and then we'll move to the camera driver.

So, this is the typical functionality of the camera HAL. As I've already said, it is very specific to the camera hardware and platform and is implemented by the vendors, so every vendor has their own proprietary HAL. It basically maps the service calls that Google mandates onto driver functions — it gets the functionality out of the driver. Ice Cream Sandwich uses camera.h, and Jelly Bean and above — probably the next versions of Jelly Bean — will use camera2.h. The camera HAL also talks to the camera driver, and there can be multiple flavors of driver available; the most popular ones are V4L2 and OpenMAX, and on top of that vendors can have their own proprietary implementations, in case they don't want to expose the functionality. The HAL communicates with the driver through file I/O calls — standard Linux I/O calls used to talk to the camera device.

This is the functional camera HAL diagram — that's how the camera HAL used to look up to ICS and the previous versions. The major functionalities of the camera HAL are: manage the memory — it should know what kind of memory it is dealing with and what kind of memory the camera hardware needs; manage the display surface — the ultimate consumer of the camera buffers is the display surface, so it has to manage the equilibrium between the display and the camera; respond to the events it gets from the driver and generate events for the application layer; and ultimately manage the camera — that's the camera manager, which talks to the camera driver.

And this is how it looks in Jelly Bean. As Balwinder already talked about, previously the HAL was more function based: the aim was to get a certain kind of output, so if you want a preview you say start preview, if you want an image capture you do take picture, and so on. From Jelly Bean onwards they are not viewing it as a piece of camera any more; they are viewing it as streams. A stream can be of multiple types — there can be a preview stream, a capture stream, a post-view stream, a metadata stream — and potentially all of them are running together at the same time.
Metadata, as Balwinder already mentioned during her presentation, is getting much more focus nowadays. It is the extra information associated with the image, and it typically opens up new horizons for creating some cool apps. Some typical metadata would be face information, or interest-point information that you want to give out, and it can be stream specific or camera specific, which can differ. The third thing they have introduced is reprocessing of a stream. The first two blocks we talked about deal with the live camera stream coming out of the sensor; in the case of the reprocess stream we are talking about a stream that is already in your memory and that you want to feed back in for reprocessing, to do some other processing on it. And the stream manager is the component that talks to the camera driver here: its job is to manage the buffers, know what kind of memory it requires, and work out how it will get the frames out of the camera driver — it is the one responsible for talking to the camera driver.

Just to understand the functional split and see who does what: the camera HAL initializes all the different blocks we have talked about — the stream manager and the rest — and dispatches the calls it gets from the camera service to the respective blocks. The stream manager is the one that handles the streaming events, gets its own buffers, manages its own memory, talks to the camera hardware, and also manages the state machine from stream-on to stream-off. The metadata handler's job is to acquire the per-shot metadata — interest points and whatever else is required — convert it to the Android format, and plumb it back up to the application layer. The reprocess stream manager sets up and manages the reprocess streams.

So that's pretty much the functionality. What has changed since camera HAL 1.0? If you look, most of the camera HAL 1.0 functionality has been moved into the service, libcameraservice — you can see Camera2Device and these other pieces have moved to the camera service. Image metadata has become more important, so there is a separate handler created specifically to handle metadata and open up new horizons for applications. Reprocessing was introduced, to process an already captured image stream. And, as I told you before, this new HAL is based on streaming: it generalizes the stream rather than working on a function basis. Camera HAL 1.0 used to have start-preview and take-picture kinds of functions, whereas in camera HAL 2.0 you will see allocate-stream, start-streaming, and stop-streaming kinds of functions, and any given stream can be a preview stream or a capture stream, and some or all of them may be running together. So this is again the same diagram — that's what has changed from camera HAL 1.0 to 2.0.
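To illustrate that shift, here are two purely illustrative pseudo-interfaces — deliberately not the real camera.h or camera2.h signatures — contrasting the verb-per-use-case style of HAL 1.x with the stream-oriented style described above:

```c
/*
 * Purely illustrative pseudo-interfaces (NOT the real camera.h / camera2.h
 * definitions) showing the shift the talk describes: HAL 1.x exposes verbs,
 * HAL 2.x exposes streams that can run concurrently.
 */

/* HAL 1.x style: one function per use case. */
struct legacy_camera_ops {
    int (*start_preview)(void);
    int (*take_picture)(void);
    int (*start_recording)(void);
};

/* HAL 2.x style: everything is a stream with an id, a size and a format;
 * preview, capture, ZSL and recording are just differently-configured
 * streams, and several can be active at once. */
struct stream_camera_ops {
    int (*allocate_stream)(int width, int height, int pixel_format, int *stream_id);
    int (*start_streaming)(int stream_id);
    int (*stop_streaming)(int stream_id);
    int (*release_stream)(int stream_id);
};
```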
Now I will be talking about the driver: some of the popular implementations of the driver, and then we'll look at the challenges — basically, what the driver offers to the upper layers. As you can see, there are multiple companies involved in creating this camera hardware, the image sensor and everything else, so the camera driver presents a standardized interface for the HAL and the layers above it to access the camera hardware, and the imager-specific or camera-hardware-specific attributes are handled at the lower level of the driver. The driver has two parts: one is very generalized and exposes the API to the top layer, and one is very specific to the imaging hardware and takes care of the actual camera device. Right now there are multiple types of sensors available. Some of them are raw sensors, which give out a bare image that needs to be processed through an ISP — and those ISPs are offered by the OEMs, the platform vendors — and there are some smart sensors which have the ISP on the chip, so you get a fully processed image and don't require the platform ISP that comes with the application processor. This difference is handled at the driver level: whether you want to bypass the ISP, or use the ISP to process the bare image and get a YUV image or whatever out of your system.

Video4Linux2 (V4L2) is used in many implementations. It has been in existence for many years, and recently it has been undergoing some changes to accommodate new interests and finer control over the hardware blocks as the hardware gets more complex. OpenMAX is the other one that is getting popular, and it is also being used to control camera hardware.

This is the V4L2 kernel-level block diagram. What it offers to the top layer is basically a generalized way to access the device through ioctl dispatch — this is the controlling interface for V4L2 — and it handles the buffer management and the camera hardware control. When we talk about buffer management, what it does depends on the camera hardware requirements: it allocates the memory — physically contiguous memory, if that is required by your camera — and manages the buffers. Basically you create a buffer pool and keep reusing it: the driver fills a buffer and gives it to the HAL for processing, and once the HAL is done with it, the HAL can queue it back to the driver. We call this the queue/dequeue mechanism. It also maintains the various states of a buffer — a buffer needs to go through multiple stages before it is ready for consumption. Apart from that, it manages the camera hardware: it has the infrastructure for device discovery, which has been done for most Linux devices, and device initialization; it talks to the device over I2C to get device-specific parameters, and if you want to do some specialized settings for particular registers it takes care of that too; it handles power management, so when you switch off it takes the sensor into low-power states — standby modes or a complete power-off state, based on your design; and it enables and disables image streaming, which is how you get the streaming out of the camera.

Now, some of the important resources that V4L2 — or any camera driver, for that matter — has to deal with.
Memory: as I told you, the memory can either be allocated at the driver level, if you need physically contiguous memory, or — with some of the more intelligent hardware present now — you can avoid using physically contiguous memory, because it's very expensive. So it depends on the kind of hardware you are using. Then the driver needs support for interrupts: it needs to handle interrupts such as frame interrupts, the autofocus interrupt, frame start, or frame finish. There are a number of interrupts and you can choose which ones you want to service; the essential ones are frame finish, where your frame is ready for consumption, and your focus events — what stage of focusing you are at and whether focus has completed or not.

Camera hardware control: normally the camera hardware is connected on peripheral buses such as I2C and SPI. I2C is the most popular one — it is still used to control the majority of camera hardware — and SPI is the faster alternative to it. Then there are GPIOs, which are basically used for the reset pins and the other standby interfaces of the camera hardware. The driver also manages the sensor power. As I said, power management for peripherals is of the utmost importance in any handheld device, so based on usage the driver puts the device into low-power and ultra-low-power modes whenever that is desired.
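On the hardware-control side, the I2C traffic involved is mostly small register reads and writes — for example, confirming a chip ID at probe time. A minimal kernel-style sketch follows; the register address and expected value are placeholders rather than the real MT9P031 register map, and many sensors need byte-swapped accesses, so this only illustrates the mechanism:

```c
/*
 * Minimal sketch of the kind of I2C access a sensor driver does at probe
 * time to confirm the part is present. Register address and expected value
 * are placeholders, not a real sensor map.
 */
#include <linux/errno.h>
#include <linux/i2c.h>

#define SENSOR_REG_CHIP_ID   0x00    /* placeholder register address */
#define SENSOR_CHIP_ID_VALUE 0x1801  /* placeholder expected id */

static int sensor_detect(struct i2c_client *client)
{
    s32 id = i2c_smbus_read_word_data(client, SENSOR_REG_CHIP_ID);

    if (id < 0)
        return id;                  /* bus error */
    if (id != SENSOR_CHIP_ID_VALUE)
        return -ENODEV;             /* wrong or missing part */

    dev_info(&client->dev, "sensor chip id 0x%04x\n", id);
    return 0;
}
```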
This next slide talks more about the buffer management. In V4L2 on Linux, one or more buffers are supported; as I told you, the buffers can either be allocated in driver space or, depending on the hardware, you can also use buffers located in user space. The buffers are queued in a circular list, and the start and end of streaming are what drive the process of filling them. When you call stream-on, the buffer at the head of the circular queue is taken out and filled with camera data; once it is filled, the camera HAL dequeues it, processes the buffer, and queues it back to the driver, so the buffer becomes available again for acquiring data. And when you are done with your imaging and want to switch off the camera, you just call the stream-off command: it stops the streaming and releases all the buffers it is currently handling.

This is the call sequence you have to follow in the V4L2 framework to get a preview out. Some of the calls are mandatory and some are optional, so let's concentrate on the important ones. The first thing is set format: here you set the image format and the size — you say what kind of color format you are dealing with and what the frame sizes will be. Then cropping: if you want to crop the image you set the crop parameters, but it's optional and you can leave it out. Then you do request buffers: you tell V4L2 how many buffers you will be using and what kind of video memory is available to the driver, and based on that it will say whether your request can be satisfied or not — request buffers is mandatory. Now, suppose the buffers are allocated in kernel space, in the driver, and you want the buffer attributes in user space for your programming: then you do query buffer, and the driver returns the buffer characteristics. Since the buffers live in the kernel, you definitely want to map them to be able to use them in user space, so that's why you do an mmap here. Finally, once you have all the buffer details, you need to queue the buffers explicitly to the driver, so you call queue buffer (QBUF), and once you have queued all of them you do stream-on. Stream-on starts the streaming process — it immediately enables the receiver. Once you are done with the streaming, once you are done with the camera, you just say stream-off: it stops the streaming and deallocates all the buffers.
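Putting that call sequence together, here is a condensed userspace sketch of the same flow — S_FMT, REQBUFS, QUERYBUF, mmap, QBUF, STREAMON, a DQBUF/QBUF loop, and STREAMOFF. Error handling is stripped, and the device node, pixel format, and buffer count are assumptions for illustration:

```c
/* Condensed V4L2 capture sequence; error handling omitted for brevity. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

#define NUM_BUFFERS 4

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);          /* assumed capture node */

    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 1280;                      /* preview size */
    fmt.fmt.pix.height = 720;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12;   /* assumed to be supported */
    ioctl(fd, VIDIOC_S_FMT, &fmt);                 /* "set format" */

    struct v4l2_requestbuffers req = {0};
    req.count = NUM_BUFFERS;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;                 /* driver-allocated buffers */
    ioctl(fd, VIDIOC_REQBUFS, &req);               /* "request buffers" */

    void *mem[NUM_BUFFERS];
    for (unsigned i = 0; i < req.count; i++) {
        struct v4l2_buffer buf = {0};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);          /* get buffer attributes */
        mem[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, buf.m.offset); /* map into user space */
        ioctl(fd, VIDIOC_QBUF, &buf);              /* queue it to the driver */
    }

    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);             /* start streaming */

    for (int frame = 0; frame < 30; frame++) {     /* grab a few frames */
        struct v4l2_buffer buf = {0};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_DQBUF, &buf);             /* a filled frame comes back */
        /* ... hand mem[buf.index] to the display / HAL here ... */
        ioctl(fd, VIDIOC_QBUF, &buf);              /* recycle the buffer */
    }

    ioctl(fd, VIDIOC_STREAMOFF, &type);            /* stop and release */
    close(fd);
    return 0;
}
```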
As I mentioned, this V4L2 framework is quite old — I guess eight to nine years old — and when it was initially built, the camera was only supposed to do preview and some captures. Today the camera is supposed to do much more than just viewfinder and image capture, and moreover the imaging hardware, the ISPs, are getting smarter and smarter: more and more IP blocks are being added to them, and to get the maximum out of the camera, users are demanding finer control, because there are multiple paths available and you can decide which path your data should pass through for processing to get the desired use case done. Keeping this in mind, the V4L2 community has come up with the media controller architecture. It is designed to support dynamically reconnectable hardware blocks — the connection has to exist in the hardware, but there can be a situation where one block can take input from multiple sources and output to multiple sinks — so it helps you create your own pipeline with a particular source and a particular sink, and it allows much greater programmer control. It introduces the notion of entities, pads, and links. Basically, your hardware block is the entity, and the pads are the way the entity talks to the outside world: there can be input pads and output pads, and based on the devices a block can be or is connected to in hardware you have that many pads, but in a given pipeline only one input and one output are active.

Here is an example taken from the OMAP3 ISP, which is a pretty simple one. This is the sensor, an MT9P031, and the sensor is enumerated as an entity with a source pad attached to it. The CCDC is the parallel imaging interface of the OMAP3 — the sensor is connected to the OMAP3 through the parallel interface — so the sensor has the source pad and the CCDC has the sink pad, and the data coming out of the sensor goes to the CCDC. Now there is a choice you can make. Suppose you don't want to do any processing at all: then you can program your pipeline for the data to flow from the sensor to the CCDC and then to the video 0 node, which is nothing but your memory — so directly to memory. That is one of the configurations you can program. Now suppose you also want to do resizing, or other things — there is a lot of other stuff, I have just taken the resizer block as an example. If you want to resize as well, you can choose the alternate path where the CCDC acts as a source pad, feeds the resizer, and the resizer output finally ends up in memory. There is a term known as the entity graph: the user has the entity graph, showing which connections between the blocks are possible, and you pick based on the use case — for burst-mode photography you may want one path, for other shots another. It all depends on the use case, but it gives you that finer control.
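Programmatically, that pipeline choice is made through the media controller device node. The sketch below enables a single sensor-to-CCDC link with the MEDIA_IOC_SETUP_LINK ioctl; the entity IDs are placeholders that a real program would obtain by enumerating entities first, and in practice most bring-up work simply uses the media-ctl utility from v4l-utils to do the same thing from the command line:

```c
/*
 * Sketch of enabling one link through the media controller API (entities,
 * pads, links). Entity IDs here are placeholders; a real program would
 * discover them with MEDIA_IOC_ENUM_ENTITIES first.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/media.h>

int main(void)
{
    int fd = open("/dev/media0", O_RDWR);   /* media controller device node */

    struct media_link_desc link;
    memset(&link, 0, sizeof(link));
    link.source.entity = 1;                 /* e.g. the "mt9p031" sensor entity */
    link.source.index  = 0;                 /* its source pad */
    link.sink.entity   = 2;                 /* e.g. the "OMAP3 ISP CCDC" entity */
    link.sink.index    = 0;                 /* its sink pad */
    link.flags         = MEDIA_LNK_FL_ENABLED;
    ioctl(fd, MEDIA_IOC_SETUP_LINK, &link); /* enable sensor:0 -> CCDC:0 */

    close(fd);
    return 0;
}
```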
So, when you bring up the camera HAL, or write your own, there are some things you need to take care of — let's talk about that. First and foremost, I think, is memory management. Since the camera HAL is the one that talks to the driver, gets the buffers, and gives them to the display for consumption, it should be aware of the kind of memory the camera requires and the kind the display requires, and if there is no match then it needs to do a copy or something like that. So when you are writing the camera HAL, you should know what kind of memory your camera hardware requires and what kind of memory your platform requires. Then there are the various implementations of the camera driver: as we have already discussed, V4L2 is one way your camera driver can be implemented, OpenMAX is another, and some vendors have their own proprietary ones. This gains importance in the scenario of multiple cameras, front and back, where one of them can be implemented with V4L2 and for the other you might use the proprietary or the OpenMAX driver, so the camera HAL needs to be aware of this. Color format conversion: there can be multiple color formats — the sensor may give out one color format and the display may expect another — so the camera HAL may need to do color conversion, using either a hardware block or a software block. Buffer synchronization: as I told you, the ultimate consumer of the camera buffers is the display surface, and the buffers are normally shared just to save memory, so the camera HAL needs to manage the buffers in a synchronized manner to avoid overrun and underrun. Then, support for advanced features: the basic camera HAL with what Android offers will support only the bare minimum features; if you want differentiating features, you have to support them in your camera HAL, either through an Android extension or by implementing the whole interface yourself.

So, are all camera HALs equal? The answer is no. In what ways do they differ? The supported features depend on the hardware capabilities — a lot of functionality can be enabled only if your hardware supports it — and also on the way you implement it in the camera HAL; there may be a need to integrate third-party IP blocks to get the functionality done. It all depends on how you program it and how you use it in your camera HAL. Then there's performance, reliability, and finally how easily you can add extensions to the Android feature set to get your other features done. I think that's all I have to talk about; we'll spend a minute on some of the latest trends in cameras and then take some questions.

[Balwinder] Okay, I guess we're running short on time, but we have a little bit left, and we did cover most of what we were thinking. Some of the emerging trends: computer vision applications, where we're finding a lot of interest — object tracking, gesture recognition, augmented reality; computational photography, where people can do the things I mentioned — HDR, flash/no-flash, hyperfocus, lots of different things — so basically the output there will be even better image quality on devices; and 3D imaging is another area where interest periodically grows and wanes, where you use multiple cameras to create a 3D image. And yes, I did mention this — it was something I found this morning: the VP at Google, Vic Gundotra, posted on Google+ last night that the next Nexus will have an insanely great camera — "just you wait and see." What does it mean? I guess we'll just wait and see, because that's about as much information as I have. And with that, we're ready to take any questions you may have.
Info
Channel: The Linux Foundation
Views: 6,989
Rating: 5 out of 5
Keywords: Embedded Linux Conference, The Linux Foundation, Linux Events, Tech Events, Open Source Events, Linux conferences, Tech conferences, Open source conferences, Android events, Android conferences, Android (operating System), Camera, Mobile
Id: Lald5txnoHw
Length: 47min 10sec (2830 seconds)
Published: Thu Mar 07 2013