Cameras in Embedded Systems: Device Tree and ACPI View

Captions
Okay, let's start. My name is Sakari, I work for Intel in Finland. I'm happy to see so many people here, so welcome everyone. I'm going to talk to you about cameras in embedded systems, and in particular I'm concentrating today on the firmware interface: the device tree and ACPI aspects of the system.

Typically we have an embedded system with a camera. The system contains an image signal processor (ISP) and a camera sensor — or rather a camera module — and in the module you also have a voice coil lens. In the diagram, the ISP, the image signal processor, is part of an SoC, and in the camera module you have the sensor, the lens voice coil and the lens itself. The sensor and the lens voice coil are typically I2C devices; you could have other buses, but most of them are I2C. These systems are often battery-powered, so there are resources used for powering the module up: there are regulators, in this case three voltages that are used for various purposes in the camera module. Then we have the clock signal, which in this example comes from the ISP, as is often the case — I don't know exactly what the reason is, but hardware engineers tend to attach the clock output of the ISP to the camera module. We also have a reset GPIO, the I2C bus which is used as a control bus for both the lens voice coil and the sensor, and finally the CSI-2 bus. Some of these resources may be shared between the sensor and the voice coil lens; what is actually shared depends on the module construction, so there tend to be different kinds of solutions. There is no one particular kind of implementation that would fit all the use cases, so this needs to be flexible.

Continuing with the background: raw sensors have very little processing logic in the sensor itself — typically there is analog and digital gain, but not much more than that. Here are two test images. This is how white looks like raw: each pixel has a single colour, and if you want the images to look like a real picture you have to interpolate. There's a test image with very little processing applied, converted with very bad quality algorithms — and it is also upside down, as you see. Image signal processors convert images like the ones you saw above into something that looks like this: an image from the US Pacific Northwest coast, a red berry (I still don't know what it is), a display of Linux and Symbian based devices in a Microsoft office, and white — that's how white looks after it has been processed by an ISP. If you compare with the previous slide, there's quite a big difference between the two. As long as everything is all right in the ISP, that's how it looks; if not, then it looks a little different.

A couple of words about the software interfaces that we use in Linux for user space. Video4Linux, V4L2, is the Linux API for capturing images. It supports video capture cards, USB webcams and of course cameras in embedded systems — the API is the same, but the devices have very different capabilities. The media controller is a control interface for complex devices that have an internal pipeline, where a single V4L2 device node is not enough. It provides pipeline discovery and configuration as well as device discovery, so with a complex device you probably want to start by opening the media controller device to discover what kind of hardware you have. This is an example of a media graph; it is from an actual mobile phone, the Nokia N9.
It's a pretty cool device, and I'm using it as an example because it is relatively simple and supported in the mainline kernel. Here we have the sensor represented by three sub-devices, and the ISP represented by the rest of the green sub-devices. The yellow boxes are V4L2 video nodes, which represent the data interface: the V4L2 sub-device interface is used for control, the V4L2 video node interface for data. The ISP is driven by a single driver, as is the sensor, and these are entirely separate devices in the system.

Proceeding to the next slide: each device is probed separately. The ISP is probably connected to PCI or AMBA or some other fast bus, and then on a different bus you have the I2C controller, and behind it the sensor and the lens voice coil. So these are separate devices that don't really have any information about each other at probe time.

How is the media device initialized? The sequence is to first initialize the media device; then the rest of the component devices can be bound to the media device. There is v4l2_device_register — this is mostly internal to probe, most drivers don't need to know much about it. Then video_register_device registers a V4L2 video device node, and in the fourth step v4l2_device_register_subdev registers the sub-device (or sub-devices) of the sensor. That is preceded by basically the same work on the ISP side, registering those devices in the media device. Finally the sub-device device nodes are registered, and media_device_register registers the media device node. So all is well at this point — except that one part of this is done in the media device driver's probe function, another part in the sensor driver's probe function, and it needs to happen in this order. That obviously doesn't work. It actually did, sort of, in the old days when we had platform data, but platform data has had a bit of a bad reputation in the recent five or so years.

So how does this work without platform data? That brings us to the next topic of the presentation, the V4L2 async framework. This is a framework that facilitates registering sub-devices; it basically gets around the limitation we saw in the media device registration sequence, which historically has been done only in probe. V4L2 async works by allowing the drivers to register callback functions that complete the registration after probe. Once all the devices have registered and the V4L2 async framework has done its job, the device should be ready to be used from user space.

This is a sequence diagram — the text is unfortunately pretty small, and it is split over two slides. The first slide is about the ISP. First the driver core calls the ISP driver's probe function. The probe function runs, then proceeds to the ISP device's device tree node to parse the local endpoint nodes of the device tree graph. Based on that information it can obtain the remote endpoints, and through them references to the sensors' device tree nodes. With this information it constructs a V4L2 async sub-device object for each endpoint node it found in the device tree. After it has enumerated all the endpoint nodes, it registers a notifier with the V4L2 async framework. The framework then checks the async sub-device list to see whether any sub-devices match — and because we haven't yet run the sensor driver's probe function, which would have registered the sensor's sub-device with the async framework, there are no sub-devices yet, so the match comes back empty.
The ISP's async notifier is then simply added to the async notifier list, and that is the end of the ISP driver's probe function as far as registration is concerned — the registration will proceed later on.

Now let's look at the sensor driver's probe. Again the probe function is called; the sensor driver proceeds to parse the endpoint properties, from which it gets the internal information it needs, for instance to configure the CSI-2 lanes. Then the sensor driver registers its sub-device with the async framework, and the async framework proceeds to match the newly registered information against the existing information — which now contains the ISP driver's async notifier — and it finds a match. It then calls the notifier's bound callback in the ISP driver. This tells the ISP driver that the sub-device corresponding to the sensor DT node it was aware of previously is now available, and the sub-device is registered with the media device. Once all the awaited sub-devices have been registered like this, the complete callback of the notifier is called in the ISP driver, and the registration can finish. This may happen well after probe, depending on when the drivers get loaded. So basically this solves the problem. It has been there for quite some time now — a few years perhaps — but there are not too many drivers that use it yet.

Proceeding to the next topic: device tree. I believe many of you know quite a lot about this topic, but for the benefit of those who don't, I'll give a summary. It is a system hardware description in a human-readable format — pretty nice — and it originates from SPARC and OpenFirmware, but nowadays it is primarily used by embedded systems running Linux, especially ARM, but I think also PowerPC, SPARC and even x86 (not many x86 implementations, but there are some). It gives us a tree structure of nodes; nodes can contain other nodes, and they can contain properties, which are key–value pairs. Before the kernel makes use of it, the device tree compiler compiles it into a binary form. The device tree specification is maintained by devicetree.org; it covers the syntax and some semantics, but notably it does not define any individual bindings for devices — the bindings that would actually tell the software how to use a certain kind of device with device tree. The Linux device tree binding documentation is part of the Linux kernel source itself. I'm not a FreeBSD developer and don't know much about it, but based on the brief research I made on the topic, the FreeBSD developers appear to be using the same bindings as Linux, at least to some extent, which is good — it means you could use other operating systems with the same device tree binary.

Device tree graphs: phandle properties in the device tree can be used to refer to another node in the tree. The device tree port concept is an extension of that. A port describes an external interface in an IP block or a device such as an ISP — so for an ISP, a port would describe, for instance, a CSI-2 receiver that you can attach a sensor to. An endpoint describes one end of a connection in that port. Endpoints have their own properties, such as the bus configuration needed to set up the bus used with the other device referred to by the phandle.

This is an example of that. I like it because it's from an actual device source file, and it's quite short. This is the sensor part: here we have a node under an I2C controller that describes the I2C device. We have a compatible string for it and a couple of other properties that are not very relevant now, but then there is the port node.
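A minimal reconstruction of the kind of source being described might look like this — the compatible string, register address, supply and lane numbers are illustrative placeholders, not the actual N9 files; only the port/endpoint/remote-endpoint structure follows the standard V4L2 device tree graph bindings:

```dts
&i2c2 {
        camera-sensor@10 {
                compatible = "vendor,example-sensor";  /* illustrative */
                reg = <0x10>;
                clocks = <&isp 0>;
                vana-supply = <&vaux3>;

                port {
                        sensor_ep: endpoint {
                                remote-endpoint = <&csi2a_ep>;
                                data-lanes = <1 2>;
                        };
                };
        };
};

&isp {
        ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@2 {                /* CSI-2 receiver "a" */
                        reg = <2>;
                        csi2a_ep: endpoint {
                                remote-endpoint = <&sensor_ep>;
                                data-lanes = <1 2>;
                        };
                };
        };
};
```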
That port node describes — or corresponds to — the CSI-2 transmitter in the sensor, and it has one endpoint, so there is one connection in that port. In that endpoint node we have a remote-endpoint property with a phandle reference in it, and this describes the link to the CSI-2 receiver elsewhere. Then the corresponding ISP part: here we have a ports node, because the ISP has several receivers, and this is just one of them — so port (receiver) number two is here, with a single endpoint that we label csi2a; that is the name referred to on the previous slide. In that port node we also have an endpoint node, and in it there is a similar remote-endpoint property — so there is a link both ways. I think there was some discussion a long time ago that it would be better, or enough, to have only a single remote-endpoint and let the kernel generate the backlink, but that doesn't seem to have happened. Oh well — this works, and I guess that's why it hasn't been changed.

About the OF graph API: it allows drivers and frameworks to parse port and endpoint nodes under the device's device tree node without knowing all the possible details related to them. For instance, you can enumerate the nodes and obtain the remote endpoint based on the phandle value.

Then we get to the next topic, which is ACPI — the Advanced Configuration and Power Interface, that's the abbreviation. In principle it is very similar to device tree in many aspects, but there are a couple of differences. I think the origins of both ACPI and device tree are around the same time, the mid-1990s or so. One of the differences is that ACPI is operating system independent, whereas device tree binaries, at least historically, have been very operating system dependent. The origins of ACPI are in x86 and the PC world, and that's where it is primarily used nowadays, but increasingly it is also being used in embedded systems. The difference between regular PCs and embedded systems is mostly that in embedded systems there is a lot more variation in the hardware, and you usually cannot probe all of the devices in the system — I2C devices, for instance, simply cannot be probed, so you have to know about them, and in this case the information would be provided by ACPI.

Another difference between ACPI and device tree is that ACPI also provides power management. In device tree based systems the drivers are, I believe, mostly responsible for implementing that, but with ACPI it comes from the firmware, through ACPI methods — those are actually runnable code, and there is a virtual machine in the kernel that runs them. The specification is developed by the UEFI Forum, and looking at the history, there seems to be roughly one release per year. The ACPI specification itself is supposed to contain all the information needed to create ACPI tables that describe the hardware. But what do you do if you have some random device that doesn't fit any of the device classes supported in the specification? Do you try to get it into the next version of the ACPI specification? There is a solution to that problem as well: _DSD, Device Specific Data. The Device Specific Data object type was added in ACPI 5.1, I think about three or four years ago. There are currently two extensions to Device Specific Data: one is the property extension, which gives you key–value pairs, and the other is the hierarchical data extension, which gives you a tree structure that can again contain other hierarchical nodes, which in turn can contain properties. Together this looks like functionality very similar to what device tree provides — but with ACPI.
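As a rough illustration of how the two _DSD extensions fit together, here is an ASL sketch. The two UUIDs are, to my knowledge, the published device-properties and hierarchical-data-extension UUIDs, but the device, its _HID, the property names and the child-node layout are all invented for illustration — consult the ACPI specification and the _DSD usage documents for the authoritative format:

```asl
Device (CAM0) {
    Name (_HID, "XYZZY123")            // illustrative hardware ID
    Name (_DSD, Package () {
        // Device properties (property extension) UUID
        ToUUID ("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
        Package () {
            Package () { "clock-frequency", 19200000 },
        },
        // Hierarchical data extension UUID
        ToUUID ("dbb8e3e6-5886-4ba6-8795-1319f52a966b"),
        Package () {
            // Key–value pair pointing at a named child data node
            Package () { "port0", "PRT0" },
        },
    })
    Name (PRT0, Package () {
        // The child node again carries properties of its own
        ToUUID ("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
        Package () {
            Package () { "reg", 0 },
        },
    })
}
```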
Then the next obvious question: if you can use this kind of construct — essentially device tree bindings — in ACPI, is ACPI still operating system independent? It would probably feel a bit awkward to ask the developers of all the other operating systems to go to the Linux kernel source code, look up the device tree binding documentation there, and try to get patches in if they need something new. There is a _DSD property registry, which is intended to be a lightweight approach for registering _DSD properties, but its scope is, at least as far as I understand, quite a lot more limited than the scope of the device tree bindings we have in the kernel at the moment. Historically there has probably been little need for many properties in ACPI, because on a PC, ACPI by itself already provides what you need, and there has been little use for the hierarchical data extension — but cameras are one of the use cases where these would be needed, in order to provide the information about the camera-related devices to the software on ACPI systems.

Something worth mentioning also is the fwnode property API. It provides access to properties independently of the underlying firmware implementation — there is device tree support and ACPI support — so you can access the properties regardless of what you have underneath, and the drivers don't really need to know. On ACPI this makes use of the _DSD property extension. Up to this point, everything I have described is basically in the mainline kernel, and it's all nice and working; what we would need in the future is what we are going to see next.

The fwnode graph API: this is currently at the level of an RFC patchset, being debated on the linux-acpi mailing list. It provides functionality very similar to the OF graph API — being able to enumerate the ports and endpoints as on device tree based systems — but in a firmware-independent way, so you could use it on ACPI as well. The device tree implementation of the fwnode graph API directly uses the device tree functions, obviously, while the ACPI implementation makes use of both the _DSD hierarchical data extension and the property extension.

Moving on to V4L2: also at RFC level is the V4L2 fwnode API. This is basically the V4L2 support for embedded systems based on ACPI, and it depends on the fwnode graph API — to use V4L2 on embedded systems with cameras on ACPI, you need both the fwnode graph API and the V4L2 fwnode API. It gives you exactly the same functionality as the V4L2 OF API, but in a firmware-independent form, and V4L2 fwnode and V4L2 OF are fully interoperable: for instance, an ISP driver that uses V4L2 OF and a sensor driver that uses V4L2 fwnode can work together without any problem.

Flash devices: we can describe flash devices in device tree currently, that's not a problem, but the kernel has no knowledge of which sensor a flash device is related to — typically the flash device just sits next to the sensor. If you have multiple cameras in the system, which one of them has a flash? And if you have multiple flashes,
maybe there is one flash per camera, or are both flashes related to a single camera? Usually there is some association, but which one it is is not known, so a property should be standardized to tell this information to the user space software.

And what about camera modules? I have been talking about camera sensors and camera modules for the past half an hour at least. What we really have is a camera module, but currently the software only sees the camera sensor, which is a tiny piece of the module itself — there is a lot of other stuff in the module as well. The reason only the sensor has been visible to software is that there has been little need for the kernel space software to be aware of the rest. It is very important in user space, though, if you have a system that runs the low-level camera image processing algorithms in user space — which is practically always the case if you are using a raw camera sensor and an ISP. The information that would typically be needed here: which sensor and which lens are related — a very similar problem to which sensor and which flash are related; what kind of lens is there in the module; and what is the voice coil spring constant. The voice coil lens driver lets you control the current you apply to the voice coil, and the more current you apply, the more the lens moves away from its default parked position — the default position is usually infinity, so the more current you apply, the closer you focus. Then: is there an infrared filter, and if so, what kind of filter is it — typically there is one, sometimes not — and what is the aperture size? All of these parameters have an effect on how you want your user space algorithms to behave.

Another aspect is power-on and power-off sequences. If you look at this diagram, we have the lens voice coil here and the sensor here, and some of the resources they need are dedicated either to the sensor or to the lens voice coil, but some of them can be shared. If there are shared resources, they need to be taken into account when the device is being powered on. So in the general case you cannot actually define — or implement — a power-on sequence in software for the sensor or for the lens voice coil alone; you have to implement it for the camera module. Otherwise it could be that, for instance in this case, you first power on the lens voice coil and all is fine — but when you later want to power on the sensor, maybe you have to raise the reset GPIO, and the clock needs to be enabled before doing that, with a delay between the two as well — so you would interfere with the already powered-on lens voice coil. What is needed is basically power-on and power-off sequences for camera modules, and all of this should be visible in the device tree as well. That is what I had — now it's time for questions.

Q: You were talking about user space. Kernel drivers are cool, but what about user space — do we have something useful?

A: I'm a kernel programmer.

Q: So what do you use for testing? You showed images, so you have something, right?

A: I use a couple of test programs, but they are mostly usable for capturing images; the algorithms are a whole different sort of problem domain.

Q: Any other questions? Here in the front — I remember there was specific SoC camera support in Video4Linux. Why is it not enough
today, and why is it going to be phased out? I know this support is to be phased out, potentially.

A: Are you referring to the soc-camera framework? If you are, the author — or at least the maintainer of this stuff — is actually here. Sorry, I didn't quite understand; could you repeat?

Q: There was specific support for SoC cameras in Video4Linux 2. Why is it not enough today, and why is it going to be phased out?

A: Okay, I believe you are referring to the soc-camera framework. That was a framework done a long time ago in V4L2 to support very simple devices: a single sensor — usually not a raw Bayer sensor, but a YUV sensor with a small ISP inside that already produces usable images without control algorithms in the host user space — and simple receivers, parallel or CSI-2, that simply receive the image data from the camera and write it out to memory. That was the problem domain of the soc-camera framework. The interfaces it used did not really match well with what V4L2 in general uses, so when implementing a sensor driver, a bridge driver or an ISP driver, you had to choose whether to support regular V4L2 or soc-camera. That's not a very good starting point: where you should basically have one class of ISP drivers and one class of sensor drivers, suddenly you have two of each, and then you need to match them somehow. So that's the history of the soc-camera framework.

Q: You mentioned that _DSD was a lightweight way of registering — I will call them bindings, by reference to DT bindings — a set of properties for a device. You also mentioned that you have a way to represent a graph of interconnected nodes that is being standardized for ACPI. But I assume you're only going through the ACPI standardization process, so that's going to be device-specific data they can express in _DSD, right?

A: Those are very good questions, thank you. The current prototype implementation makes use of the _DSD property and hierarchical data extensions, which by themselves are part of the ACPI standard, but the standard doesn't define how you use them — so that leaves the question somewhat open. The _DSD property registry seems intended to be a registry of properties, of how properties are being used by different devices, but what its exact scope is, and what would be the best way to get this supported more generally — for instance by other operating systems, perhaps also Windows — those are good questions.

Q: But my point is that if you define properties that allow you to express the graph in _DSD, you have no guarantee that a given sensor vendor will use them. There is no guarantee — it is up to whoever actually writes the firmware.

A: I have one more slide, if I can find it. This is, as far as I understand, roughly the logistics of how it works — I don't claim to be an expert on either side, but this is my understanding. With ACPI, the SoC vendor, or perhaps the BIOS vendor, provides the BIOS to the system vendor, and it eventually ends up in the flash memory on the motherboard, or on support websites or something. Compared to device tree, there is no "Linux kernel ACPI table source" on the right side of this picture.

Q: So we hope that the picture on the left changes, and that someday the DTS would be distributed by the hardware vendor, in the bootloader, not in the kernel.

A: Yeah, I understand that has been the hope, but I think we are not there yet. The kernel DTS files serve the very important purpose of providing the right kind of reference information to the hardware vendors, so hopefully the hardware vendors will be looking at them — and the binding documentation can be found there as well.

Q: There's actually no unified opinion on that: some people want to distribute the device tree with the kernel, some with the hardware, and you'll find people with both use cases. That will continue for a long time, I'm sure.

If there are no other questions, then thank you for listening.
Info
Channel: The Linux Foundation
Views: 5,080
Rating: 4.4893618 out of 5
Id: tB6x95N2yHQ
Length: 42min 28sec (2548 seconds)
Published: Tue Apr 04 2017