Posted by TensorFlow Team
Google I/O 2022 was a major milestone in the evolution of AI and Machine Learning for developers. We’re really excited about the potential for developers to build intelligent solutions with our technologies and Machine Learning, and we believe that 2022 is the year when AI and ML become part of every developer’s toolbox.
At the I/O keynotes we showed our fully open source ecosystem that takes you from end to end with Machine Learning. There are developer tools for managing data, training your models, and deploying them to a variety of surfaces, from global-scale cloud all the way down to tiny microcontrollers…and of course ways to monitor and maintain your systems with MLOps. All of this comes with a common set of accelerated hardware for training and inference, along with open source tooling for responsible AI end to end.
You can get a tour of this ecosystem in the keynote “AI and Machine Learning updates for Developers”.
Responsible AI review processes: From a developer’s point of view
We can all agree that responsible and ethical AI is important, but when you want to build responsibly, you need tooling. We could, and will, create a whole video series about these tools, but the great content to watch right now is the talk on the Responsible AI review process. Googlers who worked on projects like the COVID-19 public forecasts or the Celebrity Recognition APIs will take you step-by-step through their thought process and how the tools lined up to help them build more responsibly and thoughtfully. You’ll also learn about some of the new releases in Responsible AI tools, such as the Counterfactual Logit Pairing library.
Adding machine learning to your developer toolbox
If you’re just getting started on your journey and you want ML to be a part of your toolbox, you probably have a million questions. Follow a developer’s journey through the best offerings, from a turnkey API that can solve basic problems fast, to custom models that can be tuned and deployed.
TensorFlow.js: From prototype to production, what’s new in 2022?
If you’re a web developer, there’s a whole bunch of new updates: the announcement of a new set of courses that will take you from first principles through a deep dive of what’s possible, plus lots of new models available to web devs. These include a selfie depth estimation model that can be used for cool things like adding a 3D effect to your pictures without needing any kind of extra sensor. You’ll also see 3D pose estimation that runs at a high FPS for real-time results, so you can do things like have a fully animated character follow your body motion. All in the browser!
Deploy a custom ML model to mobile
If you want to build better mobile apps with AI and Machine Learning, you probably need to understand the ins and outs of getting models to execute on Android or iOS devices, including shrinking them and optimizing them to be power friendly. Supercharge your model with new releases from the TensorFlow Lite team that let you quantize, debug, and accelerate your model on the CPU or via GPU delegates, and a whole lot more.
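To give a flavor of that workflow, here’s a minimal sketch of post-training dynamic-range quantization with the TensorFlow Lite converter; the SavedModel path is a placeholder for a model you’ve already trained and exported.

```python
import tensorflow as tf

# Minimal sketch: post-training dynamic-range quantization with the
# TensorFlow Lite converter. "path/to/saved_model" is a placeholder for a
# model you have already trained and exported as a SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()

# Write the compact .tflite flatbuffer that ships inside your mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Dynamic-range quantization alone typically cuts the model size to roughly a quarter by storing weights as 8-bit integers; the talk walks through further options such as full integer quantization and hardware delegates.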
Further on the edge with Coral Dev Board Micro
Speaking of acceleration, this year at I/O we introduced the Coral Dev Board Micro. This is a new microcontroller class device with an on-board Edge TPU that’s powerful enough to run multiple models in tandem. The Coral team has also updated their catalog of pre-trained models, with over 40 models now available for you to use on embedded systems out of the box!
Tips and tricks for distributed large model training
On the other end of the spectrum, if you want to train large models, you’ll need to understand how to shard training and data across multiple processors or cores. We’ve released lots of new guidance and updates for model and data parallelism. You can learn all about them in this talk, including lessons learned from Google researchers in building language models.
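As a small taste of the data-parallel side, here’s a minimal sketch using tf.distribute.MirroredStrategy; the toy model and random data are illustrative placeholders, not from the talk.

```python
import tensorflow as tf

# Minimal sketch of synchronous data parallelism with MirroredStrategy.
# The toy model and random data below are illustrative placeholders.
strategy = tf.distribute.MirroredStrategy()  # replicates across local GPUs
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored on every replica.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Each global batch is split across the available replicas automatically.
x = tf.random.normal((1024, 20))
y = tf.random.normal((1024, 1))
model.fit(x, y, batch_size=64, epochs=2)
```

The same Keras training loop scales beyond one machine by swapping in a multi-worker or TPU strategy, which is the kind of trade-off the talk digs into.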
Easier data preprocessing with Keras
Of course, not all data is big data, and if you’re not building giant models, you still need to be able to manage your data. Often this is where devs will write the most code for ML, so we want to highlight some ways of making this easier, in particular with Keras. Keras’s new preprocessing layers not only make vectorization and augmentation much easier, but also allow for precomputation to make your training more efficient by reducing idle time. Learn about data preprocessing from the creator of Keras!
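For instance, here’s a small sketch of the pattern with the TextVectorization preprocessing layer; the example texts and the tiny model are made up for illustration.

```python
import tensorflow as tf

# Minimal sketch of a Keras preprocessing layer; the texts and the tiny
# model are made-up placeholders to show the pattern.
texts = tf.constant(["the movie was great", "the plot was thin"])

# TextVectorization learns its vocabulary once, up front, via adapt(),
# so that work is precomputed rather than repeated every training step.
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=1000, output_sequence_length=8)
vectorizer.adapt(texts)

# The layer lives inside the model, so the exact same preprocessing ships
# with it at inference time.
model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(1000, 16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
print(model(tf.constant(["the acting was great"])))
```

Because the preprocessing is part of the model, there’s no risk of training and serving drifting apart, and adapt() keeps the expensive vocabulary pass out of the training loop.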
An introduction to MLOps with TFX
Finally, let’s not forget MLOps and TFX, the open source, end-to-end pipeline management tool. Check out the talk from Robert Crowe, who will help you understand everything from why you need MLOps to managing your process and handling change. You’ll see the component model in TFX, and get an introduction to the new TFX-Addons community that’s focused on building new components. Check it all out in this talk!
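To make the component model concrete, here’s a rough sketch of a two-component TFX pipeline run with the local orchestrator; the data paths and the trainer module file are placeholders.

```python
from tfx import v1 as tfx

# Minimal sketch of the TFX component model: an ExampleGen feeding a
# Trainer, assembled into a pipeline and run with the local orchestrator.
# All paths and the trainer module file are placeholders.
example_gen = tfx.components.CsvExampleGen(input_base="path/to/csv_data")

trainer = tfx.components.Trainer(
    module_file="path/to/trainer_module.py",  # defines run_fn() for training
    examples=example_gen.outputs["examples"],
    train_args=tfx.proto.TrainArgs(num_steps=100),
    eval_args=tfx.proto.EvalArgs(num_steps=10),
)

metadata_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
    "path/to/metadata.db")

pipeline = tfx.dsl.Pipeline(
    pipeline_name="hello_tfx",
    pipeline_root="path/to/pipeline_root",
    metadata_connection_config=metadata_config,
    components=[example_gen, trainer],
)

# LocalDagRunner executes each component in dependency order on this machine.
tfx.orchestration.LocalDagRunner().run(pipeline)
```

The same pipeline definition can later be handed to a production orchestrator, which is exactly the kind of progression the talk covers.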
I/O wasn’t just about new releases and talks! If you’re inspired by any of what you saw, we also have workshops and learning paths you can dig into to learn more.
Full playlist to all AI/ML talks and workshops.
That’s it for this roundup of AI and ML at Google I/O 2022. We hope you’ve enjoyed it, and we’d love to hear your feedback when you explore the content. Please drop by the TensorFlow Forum and let us know what you think!