[MUSIC PLAYING] JOSH GORDON: OK, so we have
a bunch of cool topics, and we'll definitely
hit the Getting Started resources in a
moment, but because we talk about MNIST a lot,
and very basic problems in computer vision a lot,
I wanted to actually start the talk by sharing a couple
of my favorite recent examples, all of which have complete
code in TensorFlow 2.0. So even if you're
new to TensorFlow, and these concepts can
take a long time to learn-- like to talk about neural
machine translation, you would need to take a class-- you can still try the code. And most of them are
50 lines or less. And they run very quickly. So anyway, they're great. Anyway, TensorFlow 2.0
is currently in alpha. It is all about ease of
use, which is the number one thing I care about. And it's ease of
use at all levels. So both for novices that
are totally starting out, as well as PhD
students that want to be able to do their research
in a slightly easier way. I'm not going to spend too
much time on this slide. I just wanted to call out
that the most important thing about TensorFlow is the user
and the contributor community. And as of now we've
had over 1,800 people contribute code
to the code base, which is huge if you think
about how many seats that would fill up here. So thank you very much to
everyone who has contributed, and the many more
who have done docs and have done teaching and
meetups and stuff like that. So it's super cool. Anyway, the alpha of TensorFlow
2 is available today. Also going to fly
through this slide. And what you should know is,
every single example linked from the following
slides will automatically install the correct version
of TensorFlow at the very top. And they'll all run in
Colaboratory, which is the best thing since the microwave. And if you run it
in Colab, there's nothing to install on
your local machine. So if you want to try out the
latest version of TensorFlow 2.0, you literally
can follow these links and with a single click
you'll be in Colab and you're good to go. So before we get
into Hello World, I wanted to very quickly talk
about things like Deep Dream. So what deep learning is, is
really representation learning. And we don't have
as much time as I'd like to go into that
in great detail. But there's been a couple
pieces of really amazing work in the last few years. And on the left we're seeing
a psychedelic image generated by a program called Deep Dream. And what's really
interesting, I just wanted to say that
the goal of Deep Dream was not to generate
psychedelic images. It was to investigate how
convolutional neural networks are so effective at
classifying images. And it was discovered that every
neuron in every layer of a CNN learns to detect different
features when you train it on a large corpus
of training data. And you can use TensorFlow
to write a loss function to try and maximally excite
one of those neurons.
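For readers following along, here is a minimal sketch of that idea: gradient ascent on the pixels of an image so that one layer of a pretrained CNN activates as strongly as possible. The model and the layer name 'mixed3' are illustrative choices, not the tutorial's exact code, which also adds tricks like tiling and octaves.

    import tensorflow as tf

    # Sketch of the Deep Dream idea: nudge an image so a chosen layer of a
    # pretrained CNN is maximally excited.
    base = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
    dream_model = tf.keras.Model(inputs=base.input,
                                 outputs=base.get_layer('mixed3').output)

    image = tf.Variable(tf.random.uniform((1, 299, 299, 3)))

    for step in range(100):
        with tf.GradientTape() as tape:
            activations = dream_model(image)
            loss = tf.reduce_mean(activations)      # "excite this layer"
        grads = tape.gradient(loss, image)
        image.assign_add(0.01 * grads / (tf.norm(grads) + 1e-8))  # gradient *ascent*

So anyway, the TLDR is, the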
fact that we can take an image and supersaturate
it with dogs is possible as an artifact
of training a CNN on a large database of images. Anyway, there's a notebook
there where you can try it out. It runs very quickly in Colab. And like less than
50 lines of code. It's surprisingly short. Once you know the idea, the
implementation in TensorFlow 2 is super easy. On the right is
an example called Style Transfer, which
I'm going to fly through. But it comes from
exactly the same idea. It's given that we've
trained an image classifier on a very large amount of data. What else can we do with it? And it exploits very
similar ideas to Deep Dream. Another really,
really cool example that I wanted to share
with you is for something called neural
machine translation. And this is like
50 lines of code, give or take, that
out of the box will train you an English
to Spanish translator. And the only input you
need to provide-- this is totally open source. We're not hitting the
Google Translate API. The reason I like
this example, it's all the code you need to
implement a mini version of Google Translate. The simplest possible
Hello World version of Google Translate can
be done in 50 lines. And the one thing I wanted
to mention very briefly is I'm a little bit
envious of people that are bilingual and trilingual. I have English and barely
high school Spanish. And so translation has
always interested me. And the way that the
translation example works, it uses something called a
sequence to sequence model. Briefly, what that does is it
takes a sentence in English or the source language,
and it maps it down to a vector, which you can
actually see and print out in the code and explore. And that vector is just
an array of numbers. That's called an encoder. There's a second
neural network-- it's two neural networks
working in parallel-- there's a second network
called a decoder. The only input to the
decoder is that vector, and the decoder
produces a sentence in the target language.
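As a rough sketch of that encoder/decoder shape (the vocabulary size, embedding size, and unit counts below are made up, and the real tutorial adds attention on top of this):

    import tensorflow as tf

    vocab_size, embedding_dim, units = 10000, 256, 512

    # Encoder: source-language tokens -> one vector (just an array of numbers).
    encoder = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim),
        tf.keras.layers.GRU(units)           # the final state is the sentence vector
    ])

    # Decoder: that vector seeds a second network that emits the target sentence.
    decoder_gru = tf.keras.layers.GRU(units, return_sequences=True)
    to_vocab = tf.keras.layers.Dense(vocab_size)

    source = tf.constant([[12, 7, 93, 4, 0, 0]])      # a padded source sentence
    sentence_vector = encoder(source)                  # shape (1, units)
    print(sentence_vector.numpy())                     # you can inspect it directly

    target_embeddings = tf.random.uniform((1, 7, embedding_dim))
    logits = to_vocab(decoder_gru(target_embeddings,
                                  initial_state=sentence_vector))

And what this means is that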
the encoder takes a sentence and it compresses it. So before, we saw
that deep learning can be seen as
representation learning because as an
artifact of training a network on a large
corpus of images to train an image
classifier, we find neurons that learn to recognize
different shapes and objects and textures. And here we can think
about deep learning as compression, because we
can take a sentence which contains a huge amount of
information and map it down to a short list of numbers. And the reason I wanted to
mention this trilingual aspect is you can actually come
up with something called an interlingual representation,
just by modifying this example. Say we wanted to translate from
English to Spanish and also from French to Spanish. Just by modifying
this code to include a small number of French to
Spanish sentences in the input data that you
provide, that vector that comes out of the encoder
will encode a sentence either in English or in French
into the same representation. And what that means is,
you're finding a way to represent concepts
independently of language. And I know this is a
little bit out of scope for Hello TensorFlow World,
which we'll get to in a sec. But the reason I want to
mention things like this is there's incredible
opportunities at the intersection of deep
learning and other fields. So for example, if
you were a linguist, perhaps you're not
super interested in the details of the
TensorFlow implementation, but what you could do is
look at this interlingual representation that we
get almost for free, and investigate it. And right there, that would be
a killer PhD paper or thesis or something like that. And so when we put
these fields together, we get super, super cool things. I'm taking way too long. I'm going to skip this entirely. I just wanted to mention
we can also take an image and, using the same encoder-decoder
architecture, map the image into a vector. Then, reusing the decoder
that we have in the translation tutorial, which maps from vectors
to sentences, almost copying and pasting it, we can learn to
caption an image. And this tutorial has
a little bit more code, but it's the same idea. It's absolutely mind-blowing. So there's a lot of value in
learning about deep learning, and it has a lot of
potential impact. Also we have two really
excellent resources for total beginners. The first is linear regression,
finding the best fit line, which is probably
the most boring thing ever, but it's a perfect way to
learn about gradient descent and even back propagation
if you wanted to. And what's awesome is
the gradient descent and back propagation
concepts that you learn about in linear regression are
exactly the same concepts that apply to every other
model you see in TensorFlow.
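A minimal sketch of what that looks like in TensorFlow 2, with synthetic data, is just a loop around a gradient tape:

    import tensorflow as tf

    # Fit y = w*x + b with plain gradient descent (the data is made up).
    x = tf.random.normal([200])
    y = 3.0 * x + 2.0 + tf.random.normal([200], stddev=0.1)

    w = tf.Variable(0.0)
    b = tf.Variable(0.0)

    for step in range(200):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(w * x + b - y))   # mean squared error
        dw, db = tape.gradient(loss, [w, b])                  # backpropagation
        w.assign_sub(0.1 * dw)                                # gradient descent step
        b.assign_sub(0.1 * db)

    print(w.numpy(), b.numpy())   # should end up close to 3.0 and 2.0

And then, as always, we have a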
great collection of Hello World examples on the website. And we just launched a Udacity
course and a Coursera course that will go into
the Hello World examples in much more depth
than we have time for here. So Paige-- who I've learned
a lot from, by the way-- is going to tell us about the
differences between TensorFlow 1 and TensorFlow 2. PAIGE BAILEY: Absolutely. Thank you, Josh. So-- [APPLAUSE] Excellent. So who here tried to use
TensorFlow around 2015, 2016? Show of hands. So a few of you. And if you tried using
it then, you probably saw something very similar
to what you see on the screen there. And if you're coming from
the scikit-learn world, this doesn't feel very
Pythonic at all, right? So you're defining
some variables. You're assigning
some values to them, but you can't
immediately use them. You have to initialize
those variables. You have to start these weird
things called queue runners, and you have to do it all
from this sess.run statement. So creating sessions,
defining variables, but not being able to
use them immediately, this wasn't straightforward.
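For reference, the slide showed roughly this kind of TensorFlow 1.x code (a representative snippet, not the exact one on screen):

    import tensorflow as tf   # TensorFlow 1.x

    x = tf.Variable(3.0)
    y = x * 4.0               # nothing is computed yet; this only builds a graph

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())   # can't use x before this
        print(sess.run(y))                             # only now do you get 12.0

And the user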
experience, we found, with the original
version of TensorFlow left a lot to be desired. So we've learned a lot
since TensorFlow 2.0, or since TensorFlow
1.0, I should say. We've learned that there's a lot
to be gained through usability. So we've adopted Keras as
the default higher-level API. We've decided to pursue
eager execution by default. And this means that you
can do crazy things, like add two numbers together
and immediately get a response. We've also made a big
push towards clarity by removing duplicate
functionality, by making consistent intuitive
syntax across all of our APIs. So instead of having thousands
of endpoints, some of which do very similar things,
and none of which have standardized
conventions, now everything feels a little bit
more consistent. And instead of having a
large series of tf.foo, we now have things like
tf.signal.foo or tf.math.foo.
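For example (these two ops are just illustrative picks), what used to live at the top level as tf.log or tf.fft now lives in a namespaced module:

    import tensorflow as tf

    print(tf.math.log(tf.constant(2.718281828)))                   # ~1.0
    print(tf.signal.fft(tf.constant([1+0j, 2+0j, 3+0j, 4+0j])))    # was tf.fft in 1.x

We've also decided to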
improve the compatibility throughout the entire
TensorFlow ecosystem. So instead of having
multiple ways to save models, we've standardized on
something called saved model. And I think we'll see an
architecture slide later in the presentation
speaking to that. We've also made a huge
push towards flexibility. We have a full
lower-level API, so if you like using the lower-level
ops, you still have that capability;
we've made everything accessible in tf.raw_ops. And we've had inheritable
interfaces added for variables, checkpoints, layers, and more. So if you like staying at a
high level, we support that. If you want to go
a little bit lower, you can do subclassing
with Keras, and if you want
to go even lower, you have access to
the full capabilities with TensorFlow Raw Ops. This is all with one API. And I think to talk a
little bit more about it, Josh is going to mention
how much we love Keras, and love its subclassing
capabilities. JOSH GORDON: Which
is completely true. Thank you so much. OK, so I know I'm talking fast,
but we have a lot of content to cover. Basically one of the huge
changes of TensorFlow-- well, we did this technically
in TensorFlow 1.x, but this is the
standard for 2.0. So we've standardized
on the Keras API, and we've extended it. Briefly, because we might
not get to the slide, if you go to keras.io, that is
the reference implementation for an open source
deep learning API spec. Keras is basically an API
without an implementation. It's a set of layers
that describes a very clear way to implement
your neural networks. But traditionally, Keras runs
on top of other frameworks. So if you do pip install Keras,
you get Keras with TensorFlow behind the scenes, and
you never see TensorFlow. And this is a perfectly
fine way to get started with machine learning. In fact, you can
do like 90% of what you need to do just with that. It's phenomenally good. In TensorFlow, if you do
pip install TensorFlow, you get the complete Keras
API and some additional stuff that we've added. There's no need to
install Keras separately. Briefly, I just want
to show you two APIs. And it says for
beginners and experts, but you can do 95%
of ML, including some of the cool
examples I showed you, with the beginner's API. The beginner's API, this
is called Sequential. And we're defining a neural
network as a stack of layers. And I know people
who have worked with deep learning before have
almost certainly seen this. There's a couple
important points. You might not
realize it, but what you're doing here when you're
defining a sequential model is you're defining
a data structure. Because your model
is a stack of layers. Keras or TensorFlow,
depending on which way you're running this,
can look at your layers and make sure
they're compatible. So it can help you debug. And what this means is if
you define a model this way, you're not likely to have errors
in the model definition itself. Your errors are going
to be conceptual errors. They're not going to
be programming errors when you define your model. And that's very valuable.
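A minimal Sequential model, roughly in the spirit of the slide (the layer sizes here are just an example):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

Here's how that looked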
in TensorFlow 1, and here's how that
looked in TensorFlow 2. Or here's how that looks now. So this hasn't changed at all. And if you're familiar
with it, great. We've added a second
style, and this is called model subclassing. And I love this, but
it's very different. So this basically feels
like object-oriented NumPy development. So many libraries do
something similar to this. The idea came from
Chainer a few years back. And what we're doing is we're
extending a class provided by the library. Here we call it model. In the constructor-- so if
you're coming from Java, this will be great--
in the constructor we define our layers. And in the call method we define
the forward pass of our model. And what's nice is the call
method is, in TensorFlow 2, this is just regular
imperative Python, exactly how you would
always write it, and it works the
way you'd expect.
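Here is a minimal subclassed model in that style (the layers are made up for illustration):

    import tensorflow as tf

    class MyModel(tf.keras.Model):
        def __init__(self):
            super(MyModel, self).__init__()
            # define the layers in the constructor
            self.dense1 = tf.keras.layers.Dense(128, activation='relu')
            self.dense2 = tf.keras.layers.Dense(10)

        def call(self, x):
            # the forward pass: plain, imperative Python
            x = self.dense1(x)
            return self.dense2(x)

    model = MyModel()
    print(model(tf.random.normal((1, 20))))   # runs eagerly, easy to debug

This makes it super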
easy to debug, and my friend Sarah, she works
on custom activation functions. If Sarah wants to quickly
try her activation function, she can write it as
she would expect. And this is a huge
difference from TensorFlow 1. This is phenomenal. For people reading
at home, you can look at the slide
and the article linked from it to learn a
lot more about that. Anyway, both types of models,
if you're familiar with Keras, can be trained using
model.fit, as you always would. Or if you would
like, you can use what we call the gradient tape. And so this is a perfect
way, if you're doing research or if you're a
student and you want to learn about back prop
and gradient descent, if you'd like to know what the
gradients of your loss function are with respect to the weights,
you can simply print them out. If you print out
grads there, you will just get a list showing
all the gradients, which makes them extremely easy
to modify and log and debug. It's great.
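A sketch of that gradient tape style, assuming model is a Keras model and (images, labels) is a batch of data:

    import tensorflow as tf

    optimizer = tf.keras.optimizers.Adam()
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    def train_step(model, images, labels):
        with tf.GradientTape() as tape:
            logits = model(images)
            loss = loss_fn(labels, logits)
        grads = tape.gradient(loss, model.trainable_variables)
        print(grads)   # a plain list of tensors: easy to inspect, log, or modify
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

So this style of code gives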
you complete flexibility, which is awesome. But you're much more likely
to have programming errors. So basically, if
you do model.fit, it works out of the box,
it's fast, it's performant, you don't have to
think about it. You can focus on the
problems that you care about. If you want to do research
and write from scratch, you can, but there's a cost. The other cost that
I wanted to mention is actually tech debt,
which is not something you might think of off the bat. So deep learning aside, if
you implement your model using the sequential
API, I can look at any code written that way. For instance, if I'm helping a
student debug, I can immediately see what the bug
is because there's a standard, conventional
way to write it. If I have students that
write code this way and they come to
me with a problem, it can take me 15
minutes to find it. And if you think
about what happens to code that you
write in a company, if you have deep
learning code that lives for five years
worked on by 50 engineers, there's a huge
cost to this style. And so I know this is obvious,
but basically software engineering best practices
apply to deep learning too. So we have the style, but
use it when you need it. More details on Keras
versus TensorFlow. Briefly, another
really awesome thing about Keras API in TensorFlow
is distributed training. So most of the work that I care
about happens on one machine. Like, what I'm
personally excited by is like a really clear
implementation of a GAN. A friend of mine is
working on CycleGAN. We'll have a tutorial
available soon. But for people training
large-scale models in production, we've greatly
simplified distributed training in TensorFlow too. So here's a Keras model, and
these are the lines of code that you need to run that model
using data parallelism on one machine with many GPUs. And that's it.
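The pattern looks roughly like this (the model here is a throwaway placeholder):

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()      # data parallelism on local GPUs

    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
        model.compile(optimizer='adam', loss='mse')

    # model.fit(dataset) now trains with each batch split across the GPUs

So assuming you have a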
performant input pipeline, which, to be honest,
takes time to write and is an engineering
discipline, but once you have that done,
distributing your model is very easy. And we're working on additional
distribution strategies for different machine and
network configurations just to encapsulate
this logic so you can train your models quickly
and focus on the problems that you care about. Another thing that's really
special about TensorFlow that I want to call out is
the documentation. So if you visit
tensorflow.org, this is just a screenshot of
one of our tutorials. You'll see most
of the tutorials have these buttons at the top. One is View on GitHub, which
will give you the Jupyter notebook. The other is run in Colab. And all of the tutorials for the
alpha version of TensorFlow 2 run end to end out of the
box with no code changes. And this is important because
it means they're easy to use and they're reproducible. So what they do is they
install the right version of TensorFlow. They download any
data sets you need. So in this GAN example,
they'll download-- I actually forget the
name of the university. The paper, I believe,
is from Berkeley, but I'm not sure that's
where the data set is hosted. Anyway, we thank
them in the tutorial. They'll download the data set. They'll train a model. They'll display the
results that you see, and from there you have a
great starting point that you can modify and hack on. OK. And now Paige will tell you
more about TensorFlow 2. PAIGE BAILEY: Excellent. Thanks, Josh. So as we mentioned
before, we've tried to standardize and to
provide compatibility throughout the entire
TensorFlow ecosystem. If you were here a
little bit earlier, you saw a really cool demo
from the TensorFlow Lite team, where you had
object segmentation and you were able to
have a human being dance and immediately have their
body shape sort of transposed and make it look
like I could dance, even though that's usually
not so much the case. The way that this
is standardized is through something
called SavedModel. So with historic TensorFlow,
so TensorFlow 1.x, there were a variety of
ways to save your models. And it made it very,
very difficult to port them to different locations where
you might want to use them. So for example, mobile or
embedded devices or servers. Now, as part of the
standardization with TensorFlow 2.0, you can take
your SavedModel and deploy it to TensorFlow
Serving, TensorFlow Lite for mobile and [INAUDIBLE]
embedded devices, TensorFlow.js for deep learning in the
browser or on a server, and then also for other
language bindings. We offer support for Rust, for
Java, for Scala, and many more.
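As a sketch, exporting once and reusing the artifact looks like this (the toy model and paths are made up):

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

    # Export once to the standard SavedModel format...
    tf.saved_model.save(model, '/tmp/my_saved_model')

    # ...then reuse the same artifact: reload it in Python,
    restored = tf.saved_model.load('/tmp/my_saved_model')

    # or convert it for TensorFlow Lite (mobile and embedded devices).
    converter = tf.lite.TFLiteConverter.from_saved_model('/tmp/my_saved_model')
    tflite_model = converter.convert()

So as I mentioned, you can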
use TensorFlow on servers, for edge devices and browsers. We have an entire training
workflow pipeline, including data ingestion and
transformation with tf.data and feature columns. For model building, with
Keras, premade estimators, if you would still like to
use those, and then also custom estimators or
custom Keras models. For training we've defaulted
to eager execution, so you'll be able to get
your responses immediately instead of doing that
frustrating thing with initializing variables
and starting queue runners, as I mentioned before. You can visualize all
of it with TensorBoard and then export again,
as a SavedModel. If you haven't already seen
it-- and this is really, really cool-- TensorBoard is now
fully supported in Google Colab, so you're able to
start it, run it, and use it to inspect and
visualize your models, all without those sort
of frustrating localhost commands.
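In a Colab or Jupyter notebook, that looks roughly like this (the magic command names may vary slightly by TensorBoard version, and the log directory is made up):

    %load_ext tensorboard
    %tensorboard --logdir logs

    # then train with the TensorBoard callback writing to the same directory:
    # model.fit(x, y, epochs=5,
    #           callbacks=[tf.keras.callbacks.TensorBoard(log_dir='logs')])

This support is also available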
for Jupyter notebooks, so if you want to use this
with Google Cloud instances or with Jupyter notebooks
locally, you absolutely can. We've also included built-in
performance profiling for Colab. This is for GPUs
and also for TPUs. So you're able to understand
how your models are interacting with hardware and then
also ways that you can improve your performance. We heard you loud and clear
that the documentation could be improved, so a
big push for 2.0 has been improving
documentation, adding API reference docs. We'll also be having a global
docs sprint in collaboration with our Google developer
experts and GDG groups later this year. And if you're interested
in collaborating on documentation, we
would love to have it. Please submit it as a pull
request to TensorFlow/examples on GitHub. We also understand that
developers and researchers need great performance,
and we did not want to sacrifice that as
part of TensorFlow 2.0. So since last year, we've
seen a 1.8x training speedup on NVIDIA Tesla GPUs. That's almost twice as fast
as some earlier versions of TensorFlow. We've seen increased
performance with Cloud TPUs, and then also great
performance in collaboration with our partners at Intel. Not just for training, though. If you're interested in
inferencing with TensorFlow Lite, we've brought
inference latency on Edge TPUs down to just two milliseconds
for quantized models. So underneath the hood,
TensorFlow Lite and TensorFlow are all about performance. We've also extended the
ecosystem in a number of ways. When TensorFlow was first
open sourced in 2015, it was a single repository
for numerical computing. And now it's grown into
an entire ecosystem. So if you're interested
in Bayesian modeling, you can use something like
TensorFlow Probability. If you're interested in
reinforcement learning, you can use TensorFlow Agents. If you're interested in text
processing you can use TF Text. If you're interested in
privacy or secure computation, you can use
TensorFlow Federated or TensorFlow Privacy. We also have a variety
of other projects-- about 80 right now. And if you're interested
in any of them, I strongly suggest
you go and take a look at the TensorFlow GitHub. And now the question
I'm sure all of you are very, very interested
in is how do I upgrade? All of this sounds great. How do I make sure that all
of my historic legacy models continue to run
with TensorFlow 2.0? And the answer is,
we've tried to make it as easy as we possibly can. We have an escape hatch to
backwards compatibility mode as part of tf.compat.v1. We have migration guides and
best practices that have all been placed on
tensorflow.org/alpha. We've also created something
called tf_upgrade_v2. It's a conversion
utility that you can run from the command line. And it takes your existing code
and upgrades it to 2.0. Not to idiomatic 2.0 syntax,
but it makes all of the changes that would
be required in order for it to run compatibly with 2.0. So what does that look like? All you would have to
do is take your model, export it as a Jupyter
Notebook or as a Python file, and then run it from the
command line with tf_upgrade_v2, the input file name, and
then what you want the output file name to be.
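A typical invocation looks something like this (the file and directory names are made up), and the script writes its findings to report.txt:

    tf_upgrade_v2 --infile model_v1.py --outfile model_v2.py
    tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/

You can do this even for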
a directory of files. The utility then cycles
through the files and you get something that looks
a little bit like this. It's a report.txt
file that tells you all of the API endpoints
that have been renamed, all of the keywords
that have been added, and then all of
the instances that need to have this escape to
backwards compatibility mode. So prefixed with tf.compat.v1. Once all of that's
done, you should be able to run your
model as expected and see no performance
regressions. And if you do,
please file an issue. That's a bug, and we would
be delighted to resolve it. I'd also like to highlight
one of the projects that one of our Google
Developer Experts has created, called tf2up.ml. So if you don't want to
run the upgrade utility from your command line, you're
free to take a GitHub URL, use tf2up, and see it
displayed with the changes in line in a browser window. This is really, really cool. Strongly suggest taking a look. So our timeline
for TensorFlow 2.0. Right now we're in alpha. We expect to launch an
RC release very soon, and we should have
an official release before the end of the year. You can also track all of this
progress publicly on GitHub. We've tried to increase
the transparency, and also the collaboration that
we see from the community as much as we possibly can,
because TensorFlow 2.0 really is all about community. And if you have questions
about any of the projects or about any of the issues
that you care about, we would love to hear them. And we would love to
prioritize them appropriately. And now to talk a little bit
about under-the-hood activities for TensorFlow. JOSH GORDON: Thanks, Paige. Due to talking fast, we might
now actually have some time. So I will ask--
this is very basic, but I think a question
that a lot of people ask is, what exactly
is TensorFlow? And if I asked you that, what
I would have said when I first heard about it is,
like, right, that's an open source machine
learning library. Great, but what is it really? What does the code look like? How is it implemented, and what
problems is it actually solving that we care about? And rather than give you
the next slide, which is a lot of text and
the answer, the way to start thinking
about this question is actually with Python. So if you think about scientific
computing in general-- and this is the whole field,
not just machine learning. Let's say you're doing
weather forecasting, and as part of
weather forecasting, you need to multiply a
whole bunch of matrices. Probably you're writing
your code in Python. But if you think about it, how
much slower is Python than C for multiplying matrices? Ballpark? And like a horribly
non-reproducible rough benchmark would be Python's
about 100 times slower than C. And that's the difference
between six seconds and 10 minutes, which is also the
difference between running on a treadmill or
having a drink, sitting in an airplane
flying to California. So Python is horribly slow,
and yet it's extremely popular for performance-intensive tasks. One of the huge
reasons why is NumPy. And what NumPy is, it's a matrix
multiplier implemented in C that you can call from Python. And this gives you
this combination of Python ease of use
but C performance. And most deep learning libraries
have the same inspiration. So TensorFlow is a C++ back end. But usually-- not
always, but usually, we write our code in Python. On top of NumPy, what TensorFlow
and other deep learning libraries add is the
ability to run on GPUs. So in addition to being in C,
you can multiply your matrices on GPUs. And if you take a
deep learning class you'll end up learning that
the forward and backward passes in neural networks are both
matrix multiplications. So we care about this a lot. All deep learning
libraries, they add automatic differentiation. So you get the
gradient so you know how to update the
variables of your model. And TensorFlow adds
something special. And there's a lot of text on
the slide in details, whatever, you can look at later. But when you write a program
in TensorFlow in Python, one thing we care
a lot about-- and I know there are a lot of
mobile developers here-- is we want to run it on devices
that don't have a Python interpreter. Examples of that would be
a web browser or a phone. And so what TensorFlow does
is your TensorFlow program is compiled to what
we call a graph. And it's a data flow graph. The word TensorFlow, tensor
is a fancy word for an array. Scalar is a tensor, an array's
a tensor, matrix is a tensor, a cube of numbers is a tensor,
it's an n-dimensional array. Flow means data flow graph. Your TensorFlow
program gets compiled behind the scenes
in TensorFlow 2-- you don't need to know
the details of this unless you're
interested-- into a graph. And that graph can be
run on different devices. And what's interesting
is it can be accelerated. So just so you
know, because I've been talking about NumPy
a lot, TensorFlow 2, if you feel like it, works
basically the same way as NumPy. The syntax is
slightly different, but the ideas are the same. So here I'm creating some data,
a tensor instead of a NumPy array, and I'm multiplying
it by itself or whatever, and it just works
like regular NumPy. I should mention that instead of
being NumPy ndarrays, TensorFlow tensors are
a different data type, but they all have this
really nice .numpy() method. So if for any reason you're
tired of TensorFlow, you just call
.numpy() and you're back in NumPy land, which is awesome.
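For example (the values are arbitrary):

    import tensorflow as tf

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    y = tf.matmul(x, x)       # runs immediately, no session required
    print(y)

    print(y.numpy())          # back to a regular numpy.ndarray

But TensorFlow can do something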
really special and cool, thanks to a really amazing
group of compiler engineers that are working on it. So here's just some random
code I wrote in TensorFlow, and I'm taking some layer and
I'm calling it on some data. And I have this horrible
non-reproducible benchmark that I ran at the bottom
there using timeit, and staring into the sun, I
think this took about 3/100 of a second to run. And in TensorFlow 2.0, we can
accelerate this code further. And the way this
works is there's only one line of code change. So if you look closely here,
I've just added an annotation and I've added a
second benchmark. So if we go back one
slide, that's before. Whoops. Let's go-- perfect. So here's before
and here's after. And with just that
annotation, this is running something like
eight or nine times faster.
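The before-and-after on the slide boils down to something like this sketch (the layer, data, and timings are made up; your numbers will differ):

    import timeit
    import tensorflow as tf

    dense = tf.keras.layers.Dense(128)
    data = tf.random.normal((100, 100))

    def eager_fn(x):
        return dense(x)

    @tf.function                 # the one-line change: compile into a graph
    def graph_fn(x):
        return dense(x)

    graph_fn(data)               # first call traces the function into a graph
    print(timeit.timeit(lambda: eager_fn(data), number=1000))
    print(timeit.timeit(lambda: graph_fn(data), number=1000))   # often faster

And your mileage will vary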
depending on the type of code that you accelerate. You won't always get
a performance benefit, but the only non-standard
Python syntax that you need to be aware of
optionally in TensorFlow 2 is just this annotation. So you don't need
to worry about-- if you learned
TensorFlow 1, this is really valuable
knowledge, because there's so many papers with really awesome
implementations in TensorFlow 1. But you don't need to worry anymore about
sessions, placeholders, feed dictionaries, that's
just not necessary. This is the only thing. Otherwise it's regular Python. And the way this works
behind the scenes-- I'm showing you
this just for fun. You can totally ignore it. I personally never look
at this because I'm not a compilers person. But it works with
something called AutoGraph. And AutoGraph is a
source-to-source Python transformer. And basically it
generates a version of this code which can be
accelerated to a greater extent by the back end. And we can actually
see what's generated. We can print out AutoGraph. And basically what
we're looking at is more or less assembly
code for that function we just wrote.
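If you're curious, you can peek at what AutoGraph generates for a plain Python function (this is just a toy function):

    import tensorflow as tf

    def f(x):
        if x > 0:
            x = x * x
        return x

    print(tf.autograph.to_code(f))   # the generated, graph-friendly source

But in TensorFlow 2, you get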
this for free, which I think is a really awesome thing. And it's super cool. That said, the fact that we need
a compilers engineer to write something as powerful and
amazing as this points us-- I mean, you might
want to think-- is we're really starting to
run into the wall with Python. And Python is by far my
favorite language of all time. It's going to be probably the
standard for machine learning for many years to come. But there's a huge amount
of value in investigating compiled languages like Swift. And if you're a Swift
developer, there's an excellent implementation
of TensorFlow in Swift, which you can learn and all the
same concepts that you learned about in Python
will apply in Swift. Likewise, TensorFlow
JS is incredible. If you're a
JavaScript developer, there's no need to learn Python. You can just go
directly in JavaScript. And this is something really
special about TensorFlow. OK, due to, as I said earlier,
talking way too fast because I thought this talk was
longer than it was, here is the best way to
learn about TensorFlow 2. Right now all of our
tutorials are hosted on tensorflow.org/alpha. If you're new to TensorFlow
and you're a Python developer, you should ignore everything
else on the website. Just go to Alpha and
this will give you the latest stuff for 2. Everything else is outdated
and totally not necessary. Because this is complicated,
I put some keywords there. And if you see any of those
keywords in a book or a video or tutorial, just skip it. It's outdated, and we should
be finished with TensorFlow 2 in a few months, at which point
we will roll the entire website over to the new stuff. So that's absolutely key. Speaking of things that
are coming up soon, here are two awesome books. And I want to mention, just so
you know what you're getting, I'm not going
pronounce the names. I mentioned earlier I'm
terrible at languages. I can't pronounce French at all. They're both brilliant,
awesome people. The first is the second
edition of the Hands-On Machine Learning book. The first uses
TensorFlow 1, so I'd recommend skipping that one. This one is coming
out in a few months. The GitHub repo
is available now. It's excellent, so you
can start with this. The second book, I'll
pronounce this one. It's Francois Chollet. Hope I got it right. And then Aurelien Geron,
which I probably got wrong. The second book doesn't
use TensorFlow at all. It uses the Keras
reference implementation. Every concept you learn in that
book will work in TensorFlow 2 just by changing the import. So you don't waste
your time at all. It's an outstanding reference. And yeah. We'll be around after if
we can answer any questions or help with anything. So thank you very much. [MUSIC PLAYING]