Hi, I'm Mike Miller,
Director of AI Devices with AWS, and General Manager
of the AI devices team. In 2017, my team launched
the AWS DeepLens device, designed to give developers
hands on experience with computer vision through
a machine learning enabled webcam. Since then, we've talked
to hundreds of customers who have used that product
to see computer vision in action, and used it to gain intuition
about how computer vision can help solve
business problems. We're incredibly excited
to bring a production-ready, scalable solution,
inspired by that product, to our customers.
With that, I'd like to thank you
for joining me as I introduce AWS Panorama, and how companies can use
computer vision at the Edge to improve operations. Let's start with a quick look
at what we'll cover today. First, an introduction
to computer vision, and how it's being used
across industries. Then, we'll explore running
computer vision at the Edge and why it's interesting, followed by an introduction
to AWS Panorama, and how customers are using it
to solve problems. We'll have a short demo, where you can see
AWS Panorama in action, and I'll close by discussing how you can get started
with AWS Panorama. First, computer vision is a term
we use when discussing AI and machine learning to process images
or video in a human-like way. So for instance,
we use computer vision to detect or classify
objects in an image, like determining if a car
or truck is present, or recognizing a whale based
on characteristics of its tail fin. Computer vision can also
determine the boundaries of where a detected object
lies inside an image. It can be used for text
and character recognition, pose detection of people, and can even identify activities
like hand washing. Companies across industries
have recognized the value of automating
previously manual inspection tasks using computer vision,
developing new innovative solutions, and gaining real time insights
into their business processes. For instance, in manufacturing, we see computer vision being used
to improve industrial processes, by automating
product quality inspection, or tracking the movement of goods
and inventory around the plant floor. With its ability to detect people, computer vision has become
a critical tool for improving public health,
such as monitoring social distancing, and assessing density
of people within a facility. In retail, computer vision is being
used to enhance customer analytics, based on accurate metrics
about how many customers visit, what products
they interact with, and where there are
opportunities to improve layout or operations to enhance
the shopping experience. Finally, CV can be used
to enhance worker safety in a variety of locations, helping to prevent workers
from coming into contact with potentially dangerous equipment,
detecting falls, or making sure that hard hats are on
before workers enter hazardous zones. Customers are getting more excited about these computer vision
use cases. But many are finding
that it's difficult to implement and scale these solutions, especially in sites
where limitations exist in connecting to the cloud
or sending data off site. These customers want to optimize
their computer vision for the Edge. Many use cases require
real time responses to what's being captured
by the video cameras. For instance, a pharmaceutical
company may want to inspect vaccine vials on a very fast-moving conveyor
belt to validate fill levels. They need sub second response
time to maintain throughput, where a round trip to the cloud
would be infeasible. Some customers operate
in environments where there are bandwidth constraints
related to cost or infrastructure, or simply
intermittent connectivity. And in these instances,
it's either very expensive to send high bandwidth
video streams to the cloud, or customers simply don't have
sufficient bandwidth to do it. They would prefer
to be able to process all of that video data
at the edge and then only optionally,
send select data back to the cloud. Finally, customers
in regulated industries may need to process
their data at the Edge due to data privacy
or governance restrictions. These customers have corporate
security policies or regulatory requirements
that restrict their ability to send video
and image data back to the cloud. As a result of these constraints,
customers are looking for a solution that allows them to capture
and process image and video data
where it resides, at the Edge. So, we developed AWS Panorama
to help these customers bring computer vision
to the Edge. AWS Panorama is a new machine
learning appliance and SDK, both of which allow organizations to bring computer vision
to their on premises cameras, to make automated predictions
with high accuracy and low latency. With AWS Panorama, companies can use
compute power at the Edge, without streaming video to the
cloud, to improve their operations by automating
visual inspection tasks, like evaluating
manufacturing quality, finding bottlenecks
in industrial processes, and assessing worker safety
within their facilities. Let's talk about Panorama components. At the Edge, Panorama supports
devices on premises that are optimized to run computer
vision applications in real time, including appliances that can
connect to existing IP cameras, and run computer vision
on those video streams, as well as new smart cameras
with onboard processing, which can directly run a machine learning application
on the captured video. Customers can use the AWS management
console for Device Management, application development,
and deployment. Customers use the console
to register and manage
their Panorama Edge devices, develop the computer vision
applications needed using familiar AWS tools
such as Amazon SageMaker and AWS Lambda, and deploy these applications
to one or many Edge devices. Panorama takes care
of automatically optimizing machine learning models for
the Panorama Edge hardware, ensuring that applications
run fast without additional
configuration overhead. Finally, after executing
computer vision applications at the Edge,
the application results and alerts can be sent to on-premises
line-of-business systems, or to AWS services
like Amazon S3, Kinesis Video Streams, or CloudWatch,
for further action and analysis. Let's double click a bit
on how Panorama works and look at each
technical component. Starting at the left, customers who want to improve
operational processes begin with the machine
learning model, either by supplying one they've
trained on Amazon SageMaker, or using a pre-built model from AWS
or third party providers. Customers use the management
console interface to register and provision
their Panorama devices, whether appliances or Panorama
enabled cameras. This flow includes specifying
the network configuration for the device,
downloading that configuration, including the encrypted credentials
to an included USB stick. The Panorama appliance then reads
that USB stick and uses it
to connect and provision itself
to your Panorama account. The management console
also allows customers to pair their trained ML model
with business logic specific to their use case
and integration points. So for example, you might use ML
to recognize pedestrians who come too close
to heavy machinery, and you would use business logic
to trigger an alert, like a siren, and log data in
a facilities management system. The business logic is created
with familiar AWS tools like AWS Lambda, making it easy
to quickly build and iterate. For example, you can process
ML predictions, like only taking action when the model exceeds
a certain confidence threshold, or sending data to a local
line-of-business system or cloud-based AWS services.
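To make that concrete, here is a minimal sketch of what such business logic could look like. It assumes a generic detection format of labels, confidence scores, and normalized bounding boxes, plus a hypothetical danger zone around a machine; it is not the actual Panorama SDK interface.

```python
# Illustrative sketch of Panorama-style business logic (not the Panorama SDK):
# filter model detections by a confidence threshold and raise an alert when a
# person overlaps an assumed danger zone around heavy machinery.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75          # only act on high-confidence predictions
DANGER_ZONE = (0.6, 0.0, 1.0, 0.5)   # assumed normalized box (x0, y0, x1, y1)

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # normalized (x0, y0, x1, y1)

def overlaps(box, zone):
    """True if two normalized boxes intersect."""
    return not (box[2] < zone[0] or box[0] > zone[2] or
                box[3] < zone[1] or box[1] > zone[3])

def process_frame(detections):
    """Business logic applied to the model output for one frame."""
    alerts = []
    for det in detections:
        if det.label != "person" or det.confidence < CONFIDENCE_THRESHOLD:
            continue  # skip low-confidence or irrelevant detections
        if overlaps(det.box, DANGER_ZONE):
            # This is where you would trigger a siren, log to a facilities
            # management system, or publish to an AWS service.
            alerts.append(det)
    return alerts

# Example: one confident detection inside the danger zone raises an alert.
frame = [Detection("person", 0.91, (0.65, 0.1, 0.8, 0.4)),
         Detection("person", 0.40, (0.70, 0.1, 0.9, 0.4))]
print(len(process_frame(frame)), "alert(s)")
```

After pairing your model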
and the business logic, you deploy this bundle
to Panorama devices. Panorama automatically optimizes
your ML models for the selected Edge device,
removing overhead and complexity in managing multiple
Edge devices. The ML model is then run
on the Panorama appliance or Panorama enabled
camera at the Edge, applying high accuracy and low
latency predictions to video. The application results,
processed by the business logic, then integrate with on-premises line-of-business applications
or automation, and, if needed, route results
to familiar AWS services. So let's talk in detail
about the Edge devices. As we do at Amazon,
we worked backwards from what our customers needed.
And as we did this, we recognized that many customers
have fleets of existing cameras deployed for manual
or reactive monitoring. So we asked ourselves,
how could customers add computer vision
to those existing cameras without needing to touch them
or upgrade the hardware? We developed an Edge appliance
optimized for computer vision applications that can automatically
discover and connect to those existing IP cameras to run
multiple machine learning models. This AWS Panorama appliance
sets up in minutes, and includes multiple Ethernet
ports for redundancy or to connect to different subnets.
Once connected to your local network, it uses the ONVIF industry standard
for discovering and connecting
to existing IP cameras. With an IP62 rating,
it's dustproof and water resistant, meaning it's appropriate for use
in harsh environmental conditions. This way, customers can use
the appliance to bring computer vision to where
it's needed in industrial locations, or they can use the included
rack mounting hardware to mount
these half rack wide units to a standard rack shelf
in a standard server rack. It uses Nvidia's powerful
Xavier AGX platform to run multiple machine learning
models across multiple streams, analyzing but not storing video
from multiple cameras in parallel. Later in 2021, customers will have
more options for Panorama enabled devices across a variety
of manufacturing partners, for Edge gateways and smart cameras
that span a range of form factors, price points and capabilities. We've partnered with the leading
silicon vendors, Nvidia and Ambarella, to support
the Nvidia Jetson product family as well as the Ambarella
CV2X product line. This enables our manufacturing
partners to build Panorama enabled Edge gateways
and smart cameras that can meet customers' unique
Edge computer vision needs. These partners, like ADLINK,
Axis Communications, Basler AG, Lenovo, Stanley Security,
and Vivotek, are using the Panorama device SDK, which includes a device software
stack for computer vision, sample code,
APIs and tools to enable and test their respective devices
for the Panorama service. So let's talk about
these applications. Currently, 98% of enterprise video
recorded is never analyzed, and the 2% that is analyzed
is usually via human review. Panorama unlocks the insights
in this enterprise video by making it easy to build
and deploy computer vision applications to analyze video
in real time. Customers can start by using
computer vision models, either ones they train themselves, or take advantage
of computer vision models from a variety
of independent software vendors. Even if a customer starts with
a very simple computer vision model, a wide array of use cases
can be unlocked. So for example, a machine learning
model that has been optimized for detecting faces
or people in an image can be used as a basis
for an array of use cases. You could simply count
the number of people seen, which enables crowd counting
and density monitoring applications. You can count people who enter
and exit a doorway or path, giving retail businesses insights
about foot traffic. You can determine when a person
comes too close to another person to monitor physical distancing,
or too close to a restricted area, for example, near heavy machinery,
to monitor workplace safety. Once a person is detected,
you can further determine if they have specific personal
protective equipment on, such as safety vests,
hard hats, or masks.
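As a simple illustration, two of those use cases, crowd counting and distance monitoring, can be sketched in a few lines, assuming the model hands back a confidence score and a normalized centroid for each detected person; this is illustrative logic, not Panorama sample code.

```python
# Sketch of crowd counting and physical-distancing checks built on a plain
# person-detection model. Detections are assumed to be (confidence, centroid)
# pairs in normalized image coordinates; thresholds are made-up values.
import math
from itertools import combinations

MIN_CONFIDENCE = 0.6
MIN_DISTANCE = 0.15   # assumed normalized distance threshold

def analyze_people(detections):
    people = [centroid for conf, centroid in detections if conf >= MIN_CONFIDENCE]
    too_close = [
        (a, b) for a, b in combinations(people, 2)
        if math.dist(a, b) < MIN_DISTANCE
    ]
    return {"count": len(people), "distancing_violations": len(too_close)}

# Example frame with three confident detections, two of them close together.
frame = [(0.9, (0.20, 0.50)), (0.8, (0.25, 0.52)), (0.7, (0.80, 0.40))]
print(analyze_people(frame))  # {'count': 3, 'distancing_violations': 1}
```

With Panorama,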
you then pair business logic with that computer vision model so that you can take action and
integrate results like crowd sizes, or the count of people,
or the workplace safety observations, into enterprise line
of business applications, or other AWS services
to trigger alerts, emails, or take other
real time actions. So let's talk about a few real
customer use cases. Fender Musical
Instruments Corporation is the world's foremost
manufacturer of guitars, basses, amplifiers,
and related equipment. Fender has been using AWS Panorama to improve their guitar
assembly process. There are many unique parts
that go into each guitar, and Fender relies upon a skilled
workforce to craft each part. AWS partnered with Fender to build
and train custom machine learning models that can
identify various guitar parts, such as the guitar neck and
headstock you see on the screen. These models can extract unique
identifiers such as serial numbers, regardless of their position in the
image, recognize the characters, and use those identifiers
to track material usage, calculate how long various
assembly steps take, and identify bottlenecks
in real time. Cargill brings food, agricultural,
financial and industrial products to people who need them
all around the world. Cargill is excited about
how computer vision can help them
innovate new processes, and optimize existing ones. First, they're
looking to optimize their yard management
at their granaries, by assessing the size of trucks
coming into their yard, and determining the optimal
loading dock for each truck. They would do this by training
a truck identification model, which identifies trucks visually
and maybe buckets them by size. They would author business logic that matches these truck
sizes to available loading docks by integrating with
their yard management systems, and, using those systems, direct
trucks to the best dock for them.
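As a hypothetical sketch of that matching logic, with made-up dock data and size buckets:

```python
# Illustrative dock-matching business logic: the model buckets each truck by
# size, and the code picks an available dock that can accept that size.
# Dock names, sizes, and availability here are assumptions for the example.
DOCKS = {
    "dock-1": {"max_size": "small",  "available": True},
    "dock-2": {"max_size": "large",  "available": True},
    "dock-3": {"max_size": "medium", "available": False},
}
SIZE_RANK = {"small": 0, "medium": 1, "large": 2}

def assign_dock(truck_size, docks=DOCKS):
    """Return the first available dock large enough for the truck."""
    for name, dock in docks.items():
        if dock["available"] and SIZE_RANK[dock["max_size"]] >= SIZE_RANK[truck_size]:
            return name
    return None  # hold the truck until a dock frees up

print(assign_dock("medium"))  # -> dock-2 in this example
```

Cargill also manages large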
and complex manufacturing and processing plants, and is excited about
using computer vision to track the movement of assets in these plants
to remove bottlenecks. With Panorama,
they can build solutions which can scale across many sites. For example, if there are
identical processes in each place at multiple plants,
they can use Panorama to deploy and manage the same application
across each of those sites, allowing them to more easily
scale their optimizations. Finally, BPX Energy
is a division of BP, which oversees onshore
continental US oil and gas exploration and production.
BP is working closely with AWS to build an IoT and cloud platform
that will enable continuous improvement of the efficiency
of their operations. One of the key areas
that will be part of this effort, is the use of computer vision
to help solve issues related to worker safety
and security. They're going to use computer vision to automate the entry and exit
of trucks to their facilities, and verify that they have
completed the correct orders. This could be through
automatic recognition of trucks based on their identification
numbers, or even license plates. They're also excited about
the possibilities for computer vision to help keep workers safe
in a number of ways, from monitoring social distancing, to setting up
dynamic exclusion zones, ensuring that pedestrians don't get
too close to dangerous machinery, as well as detecting oil leaks. The ability to deliver
all of these solutions on a single hardware platform
with an intuitive user experience is what has BPX so excited. We're looking forward
to what they can do with Panorama. Now, let's have a short demo, where you'll be able
to see Panorama in action. In this demo, my colleague
Fu is using an AWS Panorama developer kit, which is a version
of the AWS Panorama appliance, designed to make development
and debugging of your Panorama applications
more streamlined. He'll start the demo
by registering a new AWS Panorama appliance developer kit, and walk through the creation
and deployment of a simple
computer vision application. Let's get started. AWS Panorama is a new machine
learning service, which gives you the ability
to make real time decisions to improve your operations by giving
you compute power at the Edge. In this demo, I'll guide you through
setting up a Panorama appliance, connecting the appliance
to IP cameras on your network, and deploying
your first application. I will be using the Panorama
appliance developer kit for this demo. The developer kit is not meant
for production use, as it allows root access for
developers to rapidly build and test [INDISCERNIBLE 00:28:56]
applications. Inside the Panorama console, choose Get Started
and choose Setup Appliance. First, give your appliance a name. Optionally, you can
add a description and tags to make managing
multiple appliances easier. Second, configure
the network settings for the appliance
to connect to the AWS Cloud. We recommend using Ethernet
for the initial setup. You can configure static IP
and DNS settings in advanced
network settings. Since this is
an appliance developer kit, you can configure SSH access
right in the console. Follow the instructions on screen
to download the configuration file and transfer it to the appliance
using the provided USB flash drive. Your appliance will take
a couple of minutes to configure and connect to the AWS Cloud.
After your appliance comes online, connect to the IP cameras
you want to use with Panorama. The appliance
can automatically discover IP cameras that comply with the ONVIF
Profile S standard on the same subnet. Alternatively,
you can manually specify RTSP camera streams on the subnet
for the appliance to connect to.
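For illustration, here is a rough sketch of the WS-Discovery multicast probe that ONVIF camera discovery is built on. The multicast address, port, and SOAP message follow the WS-Discovery standard, but this is a simplified example rather than what the appliance actually runs.

```python
# Minimal sketch of a WS-Discovery probe (the multicast mechanism ONVIF uses
# for camera discovery), assuming cameras listen on the standard port 3702.
import socket
import uuid

PROBE = f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>uuid:{uuid.uuid4()}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body><d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe></e:Body>
</e:Envelope>"""

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.settimeout(3)
sock.sendto(PROBE.encode(), ("239.255.255.250", 3702))  # WS-Discovery multicast group
try:
    while True:
        data, addr = sock.recvfrom(65535)
        # Stream URIs would come from a follow-up ONVIF GetStreamUri call.
        print(f"ONVIF device responded from {addr[0]}")
except socket.timeout:
    pass
```

After the appliance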
has found your cameras, you can add the camera credentials
to access their video streams. Your appliance is now ready for use. Let's deploy your first
Panorama application. A Panorama application
uses a machine learning model to make a prediction
about what is in the video, such as detecting people,
identifying objects, or determining positions
of items in the video. The machine learning model
is paired with business logic code that is customizable to take some
action based on the prediction. The code can connect to the cloud
to trigger an alert, write data to a database, or integrate with business
systems on your network. The machine learning model
and code for this demo is available at our Panorama samples GitHub repo.
Let's give your application a name. Then, let's add the machine
learning model which you would use
to generate predictions. If you trained your model
using SageMaker, you can import it using
the SageMaker training job ID. Panorama supports models trained
using TensorFlow, PyTorch, or MXNet. To use a custom model with Panorama, specify the model location,
name, and data input parameters. Data input parameters
help the machine learning compilers optimize the model
to run on the Panorama appliance. You can even use multiple models
within one application with Panorama.
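As an illustration, the data input parameters boil down to the input tensor name and shape that the compiler optimizes against; the bucket, names, and shape below are hypothetical, not the values used in this demo.

```python
# Hypothetical example of the information supplied when importing a custom
# model: an artifact location, a name, the framework, and the input layer
# name mapped to an NCHW shape. All values here are illustrative assumptions.
model_import = {
    "model_artifact": "s3://example-bucket/models/people-detector.tar.gz",
    "model_name": "people-detector",
    "framework": "MXNET",                       # TensorFlow, PyTorch, or MXNet
    "data_input": {"data": [1, 3, 512, 512]},   # input layer name -> (N, C, H, W)
}
print(model_import["data_input"])
```

Now that you have imported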
your machine learning model, let's work on
your business logic code. In this example,
you use the AWS Lambda console to create
the business logic code to count up the number
of people in the frame and display it on the HDMI output. For production use cases,
you can integrate the machine learning model's results with systems
on your local network or in the cloud.
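Here is a rough sketch of the counting-and-overlay step just described, assuming OpenCV-style frames and a simple list of labeled detections; it is a simplified stand-in for the sample Lambda code, not the code itself.

```python
# Sketch of counting confident 'person' detections and drawing the count on
# the frame, which is what ends up on the HDMI output in the demo.
import cv2
import numpy as np

def annotate_people_count(frame, detections, threshold=0.5):
    """Draw the number of confident 'person' detections onto the frame."""
    count = sum(1 for label, conf in detections
                if label == "person" and conf >= threshold)
    cv2.putText(frame, f"People: {count}", (30, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 3)
    return frame

# Example on a blank 720p frame with two confident person detections.
blank = np.zeros((720, 1280, 3), dtype=np.uint8)
annotate_people_count(blank, [("person", 0.9), ("person", 0.7), ("car", 0.8)])
```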
After you finish editing your code, publish the version
from the Lambda console and return to Panorama
to import the code for use in your application. You've just created your first
machine learning application with AWS Panorama. Now let's deploy the application
to your appliance. First, choose the Panorama appliance
you want to deploy to. Second, choose
the camera streams you want to feed
into the application. Panorama applications can process
multiple camera streams in parallel. Finally, select deploy.
For debugging purposes, you can view the video
output on an HDMI monitor connected to the appliance. Here I'm showing a sample video
of what you can expect. You can also monitor
the status of your deployment in the panel on the console
or through logs in CloudWatch. Now that you've deployed
your first application, you're ready to start creating
your own Panorama applications. Check out
our getting started guide, or visit our open source
Panorama samples GitHub repo for more
sample code and tutorials. So, now that you've seen
a little more detail about how AWS Panorama works, let's talk about
how our community of partners can accelerate your Panorama usage.
To get started with Panorama quickly, you can work with
our community of partners to acquire Panorama devices,
install them, and build and deploy
CV applications that meet your unique
business case needs. We have a large community
of these businesses who can bring their expertise
in computer vision to help you get
the most out of AWS Panorama. CV model and application partners can help you build
computer vision models and applications
for your specific use case. System integrators
and consulting partners can accelerate your CV journey by helping you explore
and implement solutions, while distribution
and integration partners can assist you with device
acquisition and system integration. Here's an example. Parkland is Canada's
and the Caribbean's largest, and one of America's fastest
growing independent suppliers and marketers of fuel
and petroleum products, and a leading convenience
store operator. They've partnered with TensorIoT,
which was founded on the instinct that the majority of compute
is moving to the Edge and all things are going
to become smarter. TensorIoT and Parkland Fuel are using
Panorama to gather retail analytics that will help drive their business,
such as counting patrons, and analyzing foot traffic
across locations and times, to optimize their staffing,
marketing programs, and product promotions. So how do you get started? Well, first, we've launched
AWS Panorama in preview. You can sign up for the preview
so that you can get started building computer vision
applications right away. Just visit aws.amazon.com/panorama
and look for the preview signup link. Access to the preview
also includes early access to purchase an AWS
Panorama appliance developer kit. The developer kit
makes it easy for customers to rapidly build, test and debug
their computer vision applications. It was designed
as a seamless way for customers to transition their applications,
from development in the preview to the AWS Panorama appliance
when it's available, ensuring that customers can move
from proof of concept to production as easily as possible. Well, that wraps up
today's presentation on using AWS Panorama to bring
computer vision to the Edge. I hope you enjoyed
the presentation. I'm excited to see what our customers
can do with Panorama. So remember to sign up
for the preview at aws.amazon.com/panorama.