Google I/O 2008 - Anatomy and Physiology of an Android

Video Statistics and Information

Captions
Welcome to my talk on the anatomy and physiology of an Android. I can't claim credit for that title, but hopefully by the end you'll see that that's what you've learned a bit about. As with all the presentations at Google I/O, we ask you to fill out the surveys being passed through the aisles so we can get feedback and do an even better presentation next year.

A bit about this presentation: it's a 101-level talk, but it does assume some basic knowledge of Android. If you've been to some of the earlier Android talks today, you probably have enough background. It also covers some basics to do with Linux, so a Linux background will definitely help some of these concepts sink in. It doesn't focus on how to build an application, and it isn't really aimed at application developers per se; it's more about what happens underneath the application framework, in the Android platform itself. And it does not include a device demo, so I'm amazed you're all still here.

A bit about myself: my name is Patrick Brady. I'm a technology program manager in our partner solutions group, working with Open Handset Alliance members today to deploy Android on handsets.

So let's get started. We'll go over Android anatomy at a high level, walking through the different layers: the Linux kernel, the native libraries, the Android runtime, and the application framework. Then I'll take you through some Android physiology, starting with a walkthrough of system startup and how the different processes evolve from the init process, followed by some layer interaction: how applications actually communicate down to the kernel level to deliver functionality to your apps.

Starting with basic Android anatomy. If you've been to any of the talks earlier today, or if you're familiar with Android at all, you've probably seen this diagram many times. It's our basic block architecture diagram. It starts with the Linux kernel; the native libraries, written in C and C++, sit on top of that; then the Android runtime with the Dalvik virtual machine; the application framework; and the applications on top. I'll walk through each of these layers in a bit of detail, calling out some but not all of the components, to give you a better idea of how they work.

Let's start with the Linux kernel. First, a disclaimer: Android is based on, or built on, the Linux kernel, but Android is not Linux. We don't have a native windowing system, we have no glibc support, and it doesn't include the full set of utilities you would typically find on a Linux distribution such as Debian, Red Hat, or Ubuntu. It is, however, based on the standard Linux 2.6.24 kernel. We started out on 2.6.23 and migrated to 2.6.24 for some added SDIO support, but we'll be holding there for launch, and probably until we have a good reason to move forward. On top of that we have a patch of kernel enhancements, really Android-specific drivers and utilities, that support the rest of the platform.

So why the Linux kernel, if we're not building an entirely Linux system? Number one, it provides great memory and process management; as any OS kernel does, it provides the basic plumbing and services to support the higher-level platform and applications.
It also provides a permissions-based security model, a model that has seen fifteen years in hostile environments, and we thought Linux was just right for that. It has a proven driver model, which gives us a great abstraction layer between hardware peripherals such as LCDs, touch screens, and audio devices and the software that uses them. It has support for shared libraries and a real pluggable module architecture that lets us plug things in very easily. And best of all, it's already open source, with a thriving community creating enhancements and moving the kernel along, building new support for things such as SDIO (SD card I/O), as I mentioned earlier, and a thriving community of engineers building drivers for all the different peripherals that will power these devices.

Going over our set of kernel enhancements: we have an alarm driver that provides timers that can wake the device from sleep, based on a monotonic time base. We have ashmem, the Android shared memory driver, which lets applications share memory and manages it at the kernel level. We have the binder, an OpenBinder-based IPC driver that facilitates inter-process communication; we'll go over that in more detail. And we have the low memory killer, the power management driver, the kernel debugger, and the logger. We're going to look at two of these in more detail, because they've gotten a lot of questions and are a bit different from your standard Linux facilities: the binder IPC driver and power management.

OK, binder. Why do we need binder? On Android, as you may have learned in some of the earlier sessions today (and if not, you can read about it online or in some of the follow-on talks at Google I/O), applications and services may run in separate processes. We start all applications in separate processes on Android, and yet they must communicate with each other and share data. Inter-process communication can typically introduce significant overhead and security holes into an architecture, so we needed a solution lightweight and powerful enough to support this design of different applications and services running in different processes.

The binder is a driver that facilitates IPC. It's high performance because it uses shared memory: data passed between applications isn't copied and marshaled through Java serialization from one process to the next; it's shared through shared memory. The binder driver manages a per-process thread pool for processing requests, so services registered as binder services, or IPC servers, don't need to worry about starting and managing all the threads that receive incoming requests; the binder driver handles that for you. It does reference counting and mapping of object references, so that objects passed between an application and a service are tracked and can be cleaned up when neither process is using them anymore. And it supports synchronous calls between processes. This is important because it means you stick to your usual programming model: you don't really have to worry that a service is running in a separate process; you call it as if it were just a method call on a class in your own process.
So let's see binder in action. You have process A, with application A, and you have process B over here, with service B. I'm going to introduce one more object here called the context, and this is really an object that gives application A a reference point, a context, for locating service B running in that separate process. Think of it as a single-node domain name service: it provides a binding from a name to the actual service at runtime. When processes start up, they're registered behind the scenes with the context object, so in a typical programming model the app just calls the context to get a service, which returns the service, and you can call your method, say foo, passing an object over to the service. That's really what it looks like to the application developer calling a service at runtime.

But let's introduce the binder driver, because it's a bit more complicated, and we actually have service B running in a separate process. Again, the binder driver spins up multiple threads in each binderized service process to handle incoming IPC requests, and this is all transparent to the person building the service. In this case, when process A calls method foo on the service stub, the call is intercepted by binder, which marshals the object and passes a reference to it over to the receiver, the service. So you're not serializing the object passed in this foo method call; you're actually passing a proxy to that same object stored in shared memory. Binder relays the method call to the service threads, the IPC binder threads, which locate the actual service B object running in that process and make the call. The return comes back to the thread, which passes it back to binder, which takes any objects passed back, marshals them back from object references to proxy objects, and passes the result back to application A. So it's a synchronous model that lets you do inter-process communication as if it were running in your local process.

The nice thing, as well, is that because it avoids serializing and deserializing objects, there's no incremental overhead for calling services that are actually running within your own process. So in cases where you do have services running in the same process, and we'll get to some of those later, you don't incur any IPC overhead. And the nice thing about binder, if you're an application developer, is that you don't really need to know all the low-level plumbing. If you look at AIDL, the Android Interface Definition Language, in the reference documentation on code.google.com, you can see how to create services where all this plumbing is handled for you.
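To make that a bit more concrete, here is a rough sketch of what the AIDL workflow looks like from a developer's point of view. The interface name ICompute, its add() method, and the ComputeService class are hypothetical examples; the Stub, asInterface(), onBind(), and bindService() pieces are the standard plumbing the talk describes being generated and handled for you.

```java
// ICompute.aidl (hypothetical) -- the aidl tool turns this into ICompute.java,
// including a Stub class that performs the binder marshaling:
//     interface ICompute { int add(int a, int b); }

import android.app.Service;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.ServiceConnection;
import android.os.IBinder;
import android.os.RemoteException;

// Service side: runs in its own process and hands out a binder object.
public class ComputeService extends Service {
    private final ICompute.Stub binder = new ICompute.Stub() {
        @Override public int add(int a, int b) { return a + b; }
    };

    @Override public IBinder onBind(Intent intent) {
        return binder;  // delivered to clients through the binder driver
    }

    // Client side: bind through the context, then call the proxy as if it
    // were a local object; binder marshals the call across processes.
    public static void bindAndCall(Context context) {
        ServiceConnection connection = new ServiceConnection() {
            @Override public void onServiceConnected(ComponentName name, IBinder service) {
                ICompute compute = ICompute.Stub.asInterface(service);
                try {
                    int sum = compute.add(2, 3);  // synchronous IPC, looks like a local call
                } catch (RemoteException e) {
                    // the remote process may have gone away
                }
            }
            @Override public void onServiceDisconnected(ComponentName name) { }
        };
        context.bindService(new Intent(context, ComputeService.class),
                            connection, Context.BIND_AUTO_CREATE);
    }
}
```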
OK, moving on to power management. There are two facts about mobile devices that are going to hold true for the foreseeable future: number one, mobile devices run on battery power (I don't think solar- or nuclear-powered devices are going to hit any time soon), and number two, batteries have limited capacity. I'm sure many of you in the crowd today brought your chargers because your batteries have more limited capacity than you would like.

So Android took a more aggressive approach to power management. It does not replace Linux power management, to be very clear. A lot of people in the Linux community have asked us, why are you building your own power management solution when power management is built into Linux and the kernel? This is not a replacement; it's built on top of Linux power management, and it's really a more aggressive policy: if no application or service specifically requests that the CPU or the LCD be kept on, we're going to shut it down, so your mobile device lasts longer. Applications and services can request that the power stay on through the concept of wake locks. We have partial wake locks, which say: keep the CPU on, but I don't really care about the display. Maybe I'm playing an MP3 in my music application, and I don't care if the display times out, because I want to save power, but I don't want the CPU to shut down, because I want the music to keep playing. Or you may request a full wake lock, which says: maybe I've created an application that displays my position on a map, and I want the display to stay on at full brightness while my location moves around, so I can see where I'm going. There are other types of wake locks that we won't walk through here, but they're all variations on these.

If we look at power management in action: you have application A again, you have the power manager, which is really just a higher-level abstraction on the power management driver, and you have the PM driver down here. Application A requests a wake lock, saying: I need a partial wake lock; maybe it's playing an MP3. The power manager passes that down through the stack (this is obviously a simplified drawing) and returns a wake lock. At that point the CPU is kept alive and won't shut down. You're playing your MP3, and because this is just a partial wake lock, after ten or thirty seconds, whatever the user has configured, the LCD turns off if you're not doing something else with the device. Then when you release the wake lock later, maybe because the user stops playing the MP3, the power manager passes down to the driver to release the wake lock, and we immediately turn off the CPU if no other process is asking for it to stay alive. For those of you who aren't familiar with mobile processor architectures, that doesn't mean we're shutting the phone off and you won't get any calls; it's just turning off the application processor. There's still a modem processor there that can receive calls and wake the device up. So again, this is a much more aggressive power management policy: if you don't specifically ask for the CPU to stay alive, we're going to shut it down. This is required because we're taking Linux from a traditionally desktop environment, where it's plugged into the wall with virtually unlimited power, to a mobile environment with fairly limited battery life.

As with binder, most applications don't need to worry about the low-level details; they basically need to understand the concept of wake locks and use them carefully. You don't want to keep a wake lock on indefinitely, because it will keep the CPU going, and a full wake lock especially will keep the LCD alive, and that makes for a very bad user experience. You also have the alternative option of using user activity, specifying when the last user activity was. If you're doing something like what Steve showed in the keynote today, where the user may not be touching the screen to generate input events or touching any keys, and yet they're moving the device around and using the accelerometer, the application wants to keep the device alive. For something like that you can use user activity and pass in a time, and it will keep the display and the CPU alive for some amount of time; underneath, it's using the same concept of a wake lock. There's more information on the power manager online at code.google.com if you want to check it out.
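As a rough illustration of the wake-lock pattern described above: PowerManager and WakeLock are the framework classes the talk refers to, while the surrounding helper, the tag string, and the playback callback are made up for this sketch. Holding a wake lock requires the WAKE_LOCK permission in the manifest.

```java
import android.content.Context;
import android.os.PowerManager;

public class PlaybackHelper {
    // Hold the CPU on while audio plays, but let the screen time out
    // (a partial wake lock). Requires android.permission.WAKE_LOCK.
    static void playWithWakeLock(Context context, Runnable playMp3) {
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        PowerManager.WakeLock lock =
                pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "example:playback");
        lock.acquire();        // CPU stays on even after the display turns off
        try {
            playMp3.run();     // stand-in for the actual media playback
        } finally {
            lock.release();    // always release, or you drain the battery
        }
    }
}
```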
OK. The entire kernel source is available today at git.android.com, for any of you who are familiar with git, the source configuration management system that kernel.org uses for the Linux kernel. I believe the name is British slang for an idiot, but that's what Linus Torvalds decided to call it, and that's what we're using. Following our commitment to open source all of Android, we're getting started early with the Linux kernel: we've open sourced it all, and we're posting patches to git.android.com on pretty much a weekly basis, so you can check it out there.

Moving on. Sitting on top of the Linux kernel are the native libraries, again written in C and C++. These are the libraries that provide low-level functionality and computation-intensive services to the Android platform. I'll walk through a couple of them in detail to give you a better idea of what's really powering the platform, and I'm going to break them down into four major groups: Bionic libc, the function libraries, the native servers, and the hardware abstraction libraries. It's a somewhat arbitrary grouping, but it's my talk.

Starting with Bionic: any discussion of the native libraries really has to start with Bionic, our custom libc, or C runtime, implementation. It's different from what you'd find on a standard Linux distribution like Red Hat or Ubuntu, and, just like power management and binder, it's optimized for embedded use. We're always building the platform with the underlying belief that this is going to run on mobile devices, and we need to design for that.

But why Bionic? There's a lot of overhead involved in maintaining your own C library, and we've gotten a lot of questions from the Linux community about it. Number one, we wanted to address the licensing problem with the standard C library: it's under the GPL, the General Public License, which has a copyleft clause, meaning that anything built with GPL code needs to be open sourced itself, and a lot of people want to write proprietary code that contains intellectual property and keep it closed source. Second, size: because libc, as we'll see later, is loaded into every process that has a native component, which is basically every process, it needs to be small. This isn't running on a desktop computer with gigabytes of RAM. And lastly, it needs to be fast, because CPU power is at a premium.

So Bionic. We're distributing it under the BSD license, the Berkeley Software Distribution license. The code comes from a mix of the different BSDs (NetBSD, OpenBSD, FreeBSD), which are C library implementations under the BSD license, free of the copyleft clause, so it allows people to keep things closed source. It's very small, about 200K, roughly half the size of a standard glibc, and that makes a big impact when you're loading it into many processes on a device. It also has fast code paths: we have a very fast custom pthread implementation that uses four-byte mutexes instead of the standard, I believe, twelve-byte mutexes, which makes threading very fast. We're running on a mobile device; we're not going to be spawning millions and millions of threads like you might on a Linux server, so we've scaled it down and made it very tight and fast. It also has built-in support for some Android-specific services: a quick way to get default system properties, as well as logging capabilities so you can log a message with priority info or debug and so on.

What some big proponents of glibc may consider bad news is that it doesn't contain certain POSIX features. We don't support C++ exceptions, because they're just too heavy for embedded devices. We also don't support wide characters, because we believe those are better handled by the International Components for Unicode (ICU) libraries. And it's not compatible with GNU libc: we implement some of the GNU libc extensions, but not all of them. This means that all native code, and again, today through the SDK we're not really exposing the ability to build native components, but when we do open source the platform, all native code will need to be compiled against Bionic, our libc, if it's going to be compatible with Android. The added benefit is that if you build against Bionic, you get all that performance and that compact size.
OK, moving on to the function libraries, and again, that's my name for them, not an official one. These are the libraries providing a lot of the power behind the Android platform. They're abstracted by the higher-level APIs that you'll see in the application framework, but this is where the heavy lifting is done. We'll walk through a couple of them here.

The WebKit browser engine: you heard Vic mention it in the keynote this morning, and I'm sure it's come up in some of the other talks, but we really believe this is the best browser engine out there. It's the same engine used in Apple's Safari and in some of the newer Nokia Series 60 devices. It supports full desktop rendering, with full CSS, JavaScript, DOM, and Ajax support, so it allows you to build rich web applications, and we've added support for single-column and adaptive-view rendering to tune it for embedded devices. But this is based on the same open source WebKit engine that's out there today. Really powerful.
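For an application developer, that engine surfaces through the framework's WebView widget. A minimal, hypothetical example of embedding it in an activity might look like this (the activity class and URL are just for illustration, and loading a page needs the INTERNET permission):

```java
import android.app.Activity;
import android.os.Bundle;
import android.webkit.WebView;

// Hypothetical activity that embeds the WebKit engine via WebView.
public class BrowserDemoActivity extends Activity {
    @Override protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        webView.getSettings().setJavaScriptEnabled(true); // full JS/DOM/Ajax support
        setContentView(webView);
        webView.loadUrl("http://code.google.com/android/");
    }
}
```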
The media framework is based on PacketVideo's OpenCORE platform, the open source release of their core media platform; I saw some PacketVideo people in the crowd earlier. It's a really powerful framework that gives you all of your standard codecs and file formats: MPEG-4, H.264, AAC, and so on. All of these will be provided as software codecs, open sourced with the platform under the Apache 2.0 license. I heard a question in one of the earlier talks: what if a codec you need in your application isn't supported, or what if you want to provide a hardware-accelerated codec? We support codec plugins, either hardware or software, via the standard OpenMAX IL interface, which is defined by the Khronos Group; it's a standard interface for exposing hardware codecs. So if you have a high-powered chip on a more expensive phone that supports hardware MP3 decode, that's going to save you power and provide better performance, and maybe better quality, and we support plugging those in at runtime.

SQLite: one of the things we knew we needed when we were building Android was a high-powered transactional data store, a relational data store that gives you the power of building desktop and web apps on a mobile device, instead of just reading and writing flash memory. So we chose another great open source project, SQLite, which is available in the public domain, and integrated it into the platform. It provides the backend storage for a lot of the persistent data on the Android platform: your contacts store, your SMS messages, all of these are stored, behind a content provider if you're familiar with those, in a SQLite database.
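Here's a small sketch of how application code typically reaches that SQLite layer through the framework's SQLiteOpenHelper class; the database name, table, and columns are invented for illustration.

```java
import android.content.ContentValues;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

// Hypothetical notes store backed by the platform's SQLite library.
public class NotesDbHelper extends SQLiteOpenHelper {
    public NotesDbHelper(Context context) {
        super(context, "notes.db", null, 1);  // name and version are made up
    }

    @Override public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE notes (_id INTEGER PRIMARY KEY, body TEXT)");
    }

    @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS notes");
        onCreate(db);
    }

    public long addNote(String body) {
        ContentValues values = new ContentValues();
        values.put("body", body);
        return getWritableDatabase().insert("notes", null, values);
    }
}
```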
Now moving on, we'll talk about the native servers. These are processes that are running and doing some of the heavy lifting to control the input and output devices on the platform, or on a device.

The first one is Surface Flinger. This is a very powerful server that's really the system-wide surface composer for Android. It takes surfaces drawn by different applications running in different processes and composes them all onto a single frame buffer for output to the device display. It can combine 2D and 3D surfaces onto a single output, and it means all these different applications don't have to worry about what they're writing to or how they're being rendered; you just send your surface down to Surface Flinger and it will composite them together and get them out on the frame buffer. Surfaces are passed as buffers via binder IPC calls, so again it's a very high-performance way of getting them from the different applications to Surface Flinger and down to the frame buffer. It can use OpenGL ES and 2D hardware accelerators for its composition, and just as with the media framework, you can plug in hardware acceleration, if you have a 3D chip on the device, through the OpenGL ES / EGL interface, again defined by the Khronos Group; khronos.org has more information. And it does double buffering: it basically paints to the back plane of the frame buffer. If you have, say, an HVGA device, it will allocate twice that for the frame buffer, paint to the background buffer, and then pan the frame buffer when it's ready, so that you never have partially drawn buffers on screen while the user is viewing them.

Moving on: Audio Flinger. What Surface Flinger is to display devices, Audio Flinger is to audio devices. It takes audio output streams from different applications running in different processes, whether that's a simple tone, game audio such as a race-car sound generated by a game, or media player audio such as an MP3, and it routes all of these to the various output devices, whether that's the earpiece on the device, the speakerphone, or a Bluetooth headset. It can sit on top of many of the standard audio implementations or drivers on Linux, such as ALSA, the Advanced Linux Sound Architecture, or OSS. So again, this isn't replacing the standard Linux audio implementations; it sits on top of those and provides the Android-specific features to the platform.

That's actually a good segue to the hardware abstraction libraries. How do we sit all of this on top of these different Linux implementations? Many of the OEMs, the handset manufacturers, already have Linux distributions on their mobile platforms, and they have drivers to control their displays, their audio, their touch pads, and so on. So we have a set of hardware abstraction libraries that provide an abstraction layer between the hardware and the upper layers of the Android platform. To give you an idea of where these sit, I'm changing the overall architecture diagram a little to show that these are native libraries sitting down in the libraries layer. They contain abstractions for things like audio, camera, Bluetooth, GPS, the RIL (the radio interface layer, which talks to the modem baseband), Wi-Fi, and many more. Each is a user-space C/C++ library, and it defines the interface that Android requires of the hardware. ALSA, for example, may expose a huge interface with all sorts of things you can do with audio, while our audio hardware abstraction layer definition just defines what Android needs of audio, so you can plug in ALSA, OSS, or anything else below. The goal here is to make porting easier. We're trying to define a clear set of APIs, in the same way we have for application developers through the SDK, for anyone who will be porting Android onto real hardware once it's open sourced, so they don't have to dig through ten million lines of source code to figure out how to get things working; they just implement drivers for these specific interfaces.

Now, any Linux gurus out there, or anyone who was paying attention when I said earlier that the Linux kernel provides a great abstraction layer, is probably asking: why do we have another abstraction layer on top of the kernel? The real reason, and we debated this internally for a while, is that not all components have a standardized interface in the kernel today. You have things like a vibrator driver or an LED driver for your device, and these aren't always exposed as standard kernel drivers. The other problem is that kernel drivers are GPL, which exposes intellectual property: the copyleft means that if you put anything in the Linux kernel, it pretty much needs to be shared as open source software. We've found that, as much as we want to ship everything as open source, there are a lot of handset manufacturers and chipset providers who feel they have a great deal of intellectual property in their drivers and want to keep it closed source.
So we needed to provide a way to do that in user space, outside of the Linux kernel. And lastly, as I said, Android has specific requirements of the hardware, and we wanted those well documented and defined in interfaces, so that people could implement them without having to figure out which driver interface in the Linux kernel to use.

As a simple example of what we're doing in our hardware abstraction library headers: we just have a header that defines the functions Android requires. I'm using a very dumbed-down example here; say I had some Acme component I wanted to plug in, maybe audio. We'd have a struct that defines the various function pointers, and an init function that Android can call to get a reference to those functions. Anyone implementing a hardware abstraction library, a hardware driver in user space for, say, audio, would compile or link this into a shared library that gets loaded on the device. At runtime, Android would call dlopen to load /system/lib/libacme.so and bind to the functions we need in the Acme component; if this were audio, we'd load something like /system/lib/libaudio.so and get a pointer to the audio functions we require. This allows very loose coupling between the low-level hardware implementation and the rest of the Android platform.

OK, so we've covered the native libraries, everything down to the Linux kernel. The real magic of the Android platform happens in the layers above this, and that's what we'll go into now, starting with the Android runtime. The Android runtime sits on top of the libraries and the Linux kernel, and it provides, number one, the Dalvik virtual machine, and the core libraries, drawn here in blue because they are exposed through the Java programming language.

The Dalvik virtual machine: remember, Android is not Linux; we don't have a native windowing system, and all of the applications and services you run will be running inside a virtual environment powered by the Dalvik VM. This is very important because it provides application portability, meaning an application written against the SDK will run on different devices that may have different low-level hardware implementations or be compiled with different toolchains; they all run in the same virtual environment. And, an important thing to call out, it does not run Java bytecode. Dalvik runs an optimized bytecode called DEX, the Dalvik executable format, and this is very tuned for the embedded environment. The Dalvik VM is lightweight, like our libc, so it supports running the virtual machine in many different processes on the same device. There's a great talk, "Dalvik VM Internals," that our lead Dalvik engineer, Dan Bornstein, will be giving at Google I/O, and I encourage you to check that out to learn more. But it's a highly CPU-optimized bytecode interpreter, and it uses runtime memory very efficiently: data structures are optimized to be passed between processes, and it uses fixed-width data structures so that parsing time is much faster. It's really just optimized for embedded use.

Sitting on top of that are the core libraries.
Underneath, you have this extremely powerful VM and all of these native libraries exposing OpenGL for 2D and 3D, the powerful Audio Flinger, and so on, but we wanted to expose to application developers a simple set of APIs they're already familiar with. So the core libraries provide the standard data structures, utilities, file access, network access, and graphics that you're used to from the standard Java programming language.

OK, now moving on to the application framework. The application framework sits on top of the Android runtime; it's drawn in blue again, signifying that it's all written in the Java programming language. It contains all the classes and core system services used to build applications, whether those are the core apps built by the Open Handset Alliance, which are or will be open sourced, such as the home app and the dialer, or apps like Pac-Man that you saw earlier today in Steve's demo. They all have access to the same set of application framework APIs. It provides Java language bindings for the native libraries, so we have abstractions for OpenGL ES and the like exposed in the application framework.

Moving on to the core platform services. These are really part of the application framework, and they provide the services that are essential to the Android platform: they manage application lifecycle, packages, loading, resources, and so on. They work behind the scenes; you typically don't interact with them directly when building an application, but they really make the entire Android platform tick.

Starting with the activity manager: it manages the application lifecycle for all activities, telling them when to start up and when to shut down, or rather not shut down but be persisted to memory so we can reclaim space. It maintains a back stack, so you can navigate through applications the same way you do in a browser, going from one application to the next: starting in an email, clicking a link into the browser, clicking an address in the browser into the Maps application, and then clicking Back, just as you do in your browser, for seamless navigation right back to the email you started in.

The package manager is used by the activity manager to load information about all the APK files, the Android package files, on the device; these are just archives that include activities, intents and intent receivers, services, and so on. The package manager tells the rest of the system what applications, or packages, are loaded on the device and what their capabilities are.

The window manager sits on top of Surface Flinger and handles all the various windows drawn by different applications; it manages their Z-order, so we know which should show in front of the others, and passes everything down to Surface Flinger to render on the device.

The resource manager handles all non-code resources in an application or service: things like externalized text strings, image files, audio files, and so on.

Content providers are components that allow you to share data between applications. For example, we have a content provider that provides access to the contacts database on the device, and you can share that between different applications so they all have access to that data. Many of these are backed by a SQLite database, but they may be backed by something else, such as network storage or flat-file access.
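As a rough sketch of what consuming a content provider looks like from another application: the content URI and column names below are hypothetical; ContentResolver.query() and Cursor are the framework pieces the talk refers to.

```java
import android.content.ContentResolver;
import android.database.Cursor;
import android.net.Uri;

// Hypothetical: read rows that some other application exposes through a
// content provider (which may be backed by SQLite, a file, or the network).
public class NotesReader {
    static void printNotes(ContentResolver resolver) {
        Uri notesUri = Uri.parse("content://com.example.notes/notes"); // made-up authority
        Cursor cursor = resolver.query(
                notesUri,
                new String[] {"_id", "body"},  // projection
                null, null,                    // no selection
                "_id ASC");                    // sort order
        if (cursor == null) return;
        try {
            int bodyColumn = cursor.getColumnIndexOrThrow("body");
            while (cursor.moveToNext()) {
                System.out.println(cursor.getString(bodyColumn));
            }
        } finally {
            cursor.close();
        }
    }
}
```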
We have the view system, which provides all of the standard widgets, buttons, a date picker, the map view, and so on, a rich set of widgets for building applications. It also handles layout, so you can have a button inside a table inside a frame layout; it manages the view hierarchy and makes sure each component is rendered in the appropriate space.

We also have a whole list of hardware services, things that aren't necessarily intrinsic to the Android platform but give you access to the lower-level hardware features on the device. These are typically accessed through a local manager object obtained, again, through the context, as we saw before: you grab, say, the location manager as a system service from the context and cast it, and then you have a handle to the local stub. So you have the telephony service, which handles all communication with the baseband radio through the radio interface layer; the location service, which handles all access to the GPS device; the Bluetooth service, the Wi-Fi service, the USB service. All of these will be accessible to application developers so they can build rich applications that make full use of the device, including a sensor service that handles things like what Steve showed this morning with the compass and the accelerometer, letting you do really cool things you don't see on mobile very often.

There's a lot to cover in the application framework and we're not going to go over everything; we've just concentrated on those few components. There's more at Google I/O, in Dan Morrill's talk on inside the Android application framework, and online at code.google.com/android.
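Here's a brief sketch of the pattern just described, grabbing a manager from the context and casting it. LocationManager and requestLocationUpdates() are the framework pieces involved; the wrapper class, the update intervals, and the empty listener body are illustrative only, and GPS access requires the ACCESS_FINE_LOCATION permission.

```java
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

// Grab the location manager from the context, cast it, and register for GPS
// updates; the service itself runs in another process, reached through binder.
public class LocationExample {
    static void watchGps(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER,
                60000L,  // minimum time between updates, in milliseconds
                10.0f,   // minimum distance between updates, in meters
                new LocationListener() {
                    @Override public void onLocationChanged(Location location) {
                        // update the map position here
                    }
                    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
                    @Override public void onProviderEnabled(String provider) { }
                    @Override public void onProviderDisabled(String provider) { }
                });
    }
}
```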
OK, we've covered the Linux kernel, the native libraries, the Android runtime, and the application framework. The last piece is the applications, but we're not going to spend a lot of time on those today because we have other things to cover. The applications sit, as I said, on top of the application framework; they have access to all of the core system services, the hardware services, and the core application framework to build great applications. The thing to stress here is that the applications we're building for the Android platform, including home, the dialer, the SMS app, and the email app, all use the same services and the same SDK that's available to application developers to build other applications.

OK, a quick walkthrough of Android physiology. I'll start with the startup walkthrough: what the system looks like and how it builds up from a single process into the multi-process environment we have in Android. Like any Linux- or UNIX-based system, at startup the bootloader boots Linux and kicks off the init process; this is similar to how any Linux system starts up. The first thing init does on Android is start some low-level processes, Linux daemons, which are typically used to handle low-level hardware interfaces. They sit on top of the hardware abstraction layer and listen on sockets for things like USB connections and adb (Android Debug Bridge) debugger connections, and there's also the radio interface layer daemon, which sits on top of the radio baseband and interfaces with the baseband modem.

After starting the Linux daemons, and we'll collapse those into the corner of the screen here to save some space, the init process starts up a really neat process called zygote. As its name implies, zygote is really the beginning of the rest of the Android platform. Zygote is a nascent VM process: it initializes a Dalvik VM, preloads a lot of the libraries, and forks on request to create new VM instances for managed processes. Again, every application runs in its own process, and we don't want to do a cold start of the VM each time. It uses copy-on-write to maximize reuse and minimize footprint, so data structures are shared and aren't fully copied unless they're modified.

The next thing init does is start the runtime process, and the runtime process does one very important thing: it starts the service manager. This is really the context object we talked about earlier; it provides the DNS-like local lookup service for all of the services controlled through binder, and it's registered as the default context for binder services. Then the runtime says, OK, I'm ready to really start managing processes, let's go, and it sends a request to zygote asking it to start the system server process. Zygote splits and starts up a new Dalvik VM with the system server. This is the first managed process to start, and it really brings us up into the blue layer on the Android stack diagram.

The first thing the system server does is start Surface Flinger and Audio Flinger, to be able to control the display and the audio output devices, as we saw; that's where you see the back-and-forth red dot on the Android emulator. These register with the service manager so they're available for other applications and higher-level services to call on, and again, we'll collapse those. The next thing the system server does is start up all of those core platform services and hardware services we talked about earlier, the telephony service, the activity manager, the window manager, and so on, all within this same process. In the same way, these register through a Java proxy object, which passes down to the service manager, saying, OK, I'm the activity manager, or I'm the package manager, and I'm ready to accept calls from other processes.

At this point you have more or less the full stack ready, everything that needs to be there for the first activity to start. Here's where we sit in terms of processes: we have the init process; the multiple daemon processes; the runtime process; our original zygote, which sticks around and continues to listen on a socket for requests to fork off and create new VMs; and the system server process, which has all of our core services and platform components.
At this point we're ready to launch the home application, or the idle screen as many of you may know it. The activity manager sends a request to zygote saying, I want to start an activity that subscribes to the main, or home, intent, and that causes zygote to fork off and load a new process, with a new Dalvik VM, running the home activity. An important thing to note here, and I can see I'm actually missing one on my diagram, is that libc is loaded in every process, and we now have multiple Dalvik VMs running in separate processes. This reinforces what I was saying earlier: everything in the Android architecture was designed to be very compact and efficient so that it can run in a multi-process environment. So when the user loads home and then hits the contacts app, that again forks off a new process with a new Dalvik VM for the contacts app and a new instance of libc loaded in that process.

So we've shown how the system builds up at runtime; now let's look at how the layer interaction works, from the applications down to the kernel. There are three main flavors of Android layer cake that we'll go over today; there are slight variations on these, but we'll concentrate on them.

The first: an application talking to a runtime, or managed, service, and down to a native library. The application makes a binder IPC call, again through the context object behind the scenes, to a runtime service running in the application framework. A lot of these runtime services have a native service binding, a native component of the service, connected through the Java Native Interface, or JNI. That gives the native binding a way to access the hardware abstraction libraries we talked about earlier, so it can do a dynamic load of, say, libacme.so, which then talks to the Linux kernel. A good example of this in the Android architecture is the location manager service: if it's not using a software implementation or network-based location, but an actual physical GPS chip on the device, it has the GPS location provider, which has two halves, the managed half and the native half, communicating through JNI; the native half does a dynamic load of libgps, which then communicates with the kernel driver to talk to the GPS chip.

The second variation of layer cake in Android is the app calling a runtime service, which calls a native service, which calls a library. We have the same basic structure we started with, except now we make another binder IPC call to a native service, which does the dynamic load and talks to the kernel driver. A great example of this is the media player. You have a media player in the Android framework that you can use to play streamed resources off the internet or local media files. It connects down through JNI to a native implementation of the media player, which talks to the PacketVideo OpenCORE framework, but it also makes a call out to Audio Flinger, over inter-process communication through binder, and Audio Flinger actually loads libaudio and talks down to the Linux kernel. Again, libaudio is really where we draw the line of what the Android platform is, so other architectures can implement it differently: it can sit on ALSA, if the particular Linux distribution running on the device has an ALSA driver to expose the audio, or it could call a proprietary audio driver. Maybe someone has a lot of IP tied up in their audio, say voice cancellation and other things they don't want to expose; Android will sit right on top of that, and as long as it's exposed through the same interface we talked about earlier, we'll be able to call it all the same.
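From the application developer's point of view, that whole path is hidden behind a few framework calls. Roughly, and with the file path made up for this sketch:

```java
import android.media.MediaPlayer;

// Play a local MP3; underneath, the framework goes through JNI to the native
// media engine, and the decoded audio is routed out via AudioFlinger.
public class PlaybackExample {
    static MediaPlayer playSong() throws java.io.IOException {
        MediaPlayer player = new MediaPlayer();
        player.setDataSource("/sdcard/music/example.mp3"); // hypothetical path
        player.prepare();   // synchronous prepare; use prepareAsync() for streams
        player.start();
        return player;      // caller should eventually call release()
    }
}
```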
And the last one, which I'll go over quickly so we have some time for questions, is an application talking to a runtime service, which talks to one of the native daemons; remember those daemons the init process started when we were initializing the system. It's the same basic architecture from the application's perspective; the app doesn't really need to know anything that's going on underneath, but here we're talking over sockets to a Linux daemon. Now, why would we choose a daemon here instead of a binder service? This is typically used in cases where you have a very simple protocol and you're not passing objects back and forth that need to be tracked for references; you're just talking over a socket using a simple API. The daemon in this case loads the HAL library and talks to the kernel driver. A great example of this is our telephony stack. The app talks to the telephony manager, which is actually a collection of APIs we've grouped here, and that talks over a socket at the native layer to the RIL daemon, the radio interface layer daemon, which talks to the baseband. So Android today can run on a Marvell chip, a Qualcomm chip, or a Broadcom chip, and we don't really care what the underlying modem connection is; we just talk to the RIL daemon, which loads libril.so, and anyone building an interface to a new radio can just drop that right in.
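Again, at the SDK level all of that sits behind a manager object. A small sketch, with the wrapper class and log tag invented for illustration, using TelephonyManager queries that don't need special permissions:

```java
import android.content.Context;
import android.telephony.TelephonyManager;
import android.util.Log;

// Query basic radio/network state; the framework relays these requests
// down through the RIL daemon to the baseband underneath.
public class RadioInfoExample {
    static void logNetworkInfo(Context context) {
        TelephonyManager tm =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
        Log.i("RadioInfo", "operator=" + tm.getNetworkOperatorName());
        Log.i("RadioInfo", "roaming=" + tm.isNetworkRoaming());
    }
}
```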
OK, so that covers the flavors of Android layer cake. We've walked through all of the components in the Android architecture and talked about how they interact at runtime, so that's pretty much it. We can open up for Q&A; we have about six minutes left, and there are microphones on this side and in the middle for anyone who wants to ask questions.

Q: A question for you on the media libraries. When we run the emulator, are we running the real PacketVideo media libraries, or something that shortcuts through to Windows or the Mac?

A: Good question. I believe you're running the real PacketVideo media libraries. The real thing to point out is that we've packaged an emulator that emulates an actual ARM processing environment; it emulates what will run on an actual device, rather than a simulator, which would just make it look and feel like Android while actually running something different underneath. So the emulator runs the same code you'll see on a device, and you should get roughly the same performance; in some cases we've seen better performance on actual hardware.

Q: So if we notice a problem with that, like I wrote a program where you press a button and it plays a note and there's a delay, does that indicate a real problem with the PacketVideo codecs?

A: It's probably related to that. The main thing to remember is that we published an early-access look at the SDK, so a lot of this stuff, for example what Steve was demoing during the keynote today, is far advanced beyond the previous SDK we launched. I think a lot of those issues have been worked out, and you'll see that in the next SDK release.

Q: Hi, I have kind of the same question, but a little different. When we check the code for the RIL in the M3 version, for example, there are some stubbed-out functions rather than a real implementation. When you mention the real .so file, will the source code for that, or at least a mock-up driver, be part of the 2.6.24 release that's on git.android.com?

A: That's a good question. Remember that those shared libraries we load at the native layer could be any number of different things: you may have a RIL based on a Qualcomm modem, or a RIL based on a Marvell modem, or something else. What Android will provide when we open source is a reference implementation that gives the basic feature set but may not be commercially ready at that layer; other people, for commercial distributions, will plug their specific drivers in. The reference implementation actually sits in user space, and we haven't open sourced it yet today.

Q: The second question would be: what's the difference if you use insmod as opposed to dlopen, like you had in your example?

A: It should be the same. To be honest, I don't know the details; we tried it and it seems to work. Our engineers are looking at that, and what we've published as part of this talk may change before we actually open source the code. But that's a very good question.

Q: Last question: you indicated that the Dalvik VM was optimized for the CPU. Can you give us any insight into which CPU architectures it's currently optimized for?

A: It's optimized for the ARM architecture; I don't know which specific ARM architecture we rely on for Dalvik. As I said, Dan Bornstein is giving a talk, so he'd be best placed to answer that, but we do have people looking at ports to other architectures.

OK, and that's it. I'll be around up front if anyone has follow-up questions.
Info
Channel: Google Developers
Views: 113,089
Rating: 4.944056 out of 5
Keywords: Google, I/O, IO2008, Android, gcvio052008, plid6D9B701069B4F2F4
Id: G-36noTCaiA
Length: 58min 27sec (3507 seconds)
Published: Mon Jun 09 2008