Learning Deep Learning: Theory and Practice of Neural Networks, Computer Vision, Natural Language Processing, and Transformers Using TensorFlow, 1st edition

Published by Addison-Wesley Professional (July 19, 2021) © 2022

  • Magnus Ekman

Title overview

Deep learning (DL) is a key component of today's exciting advances in machine learning and artificial intelligence. Learning Deep Learning is a complete guide to DL. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this book is ideal for developers, data scientists, analysts, and others--including those with no prior machine learning or statistics experience.

After introducing the essential building blocks of deep neural networks, such as artificial neurons and fully connected, convolutional, and recurrent layers, Magnus Ekman shows how to use them to build advanced architectures, including the Transformer. He describes how these concepts are used to build modern networks for computer vision and natural language processing (NLP), including Mask R-CNN, GPT, and BERT. And he explains how to build a natural language translator and a system that generates natural language descriptions of images.
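To make those building blocks concrete, the short sketch below (not taken from the book) constructs each layer type the paragraph names with the tf.keras layer API; all unit counts and input shapes are illustrative assumptions.

```python
# Minimal sketch of the building blocks named above, using tf.keras.
# All unit counts and input shapes here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

# Fully connected (dense) layer: every input feeds every artificial neuron.
dense = layers.Dense(units=64, activation="relu")
print(dense(tf.zeros((1, 16))).shape)        # (1, 64)

# Convolutional layer: slides 3x3 filters across an image.
conv = layers.Conv2D(filters=32, kernel_size=3, activation="relu")
print(conv(tf.zeros((1, 28, 28, 1))).shape)  # (1, 26, 26, 32)

# Recurrent (LSTM) layer: consumes a sequence one time step at a time.
lstm = layers.LSTM(units=128)
print(lstm(tf.zeros((1, 10, 8))).shape)      # (1, 128)

# Multi-head attention: the core operation behind the Transformer.
attn = layers.MultiHeadAttention(num_heads=4, key_dim=32)
q = tf.zeros((1, 10, 32))
print(attn(q, q).shape)                      # (1, 10, 32), self-attention
```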

Throughout, Ekman provides concise, well-annotated code examples using TensorFlow with Keras. Corresponding PyTorch examples are provided online, so the book covers the two dominant Python libraries for DL used in industry and academia. He concludes with an introduction to neural architecture search (NAS), exploring important ethical issues and providing resources for further learning.
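As a flavor of that style, here is a generic tf.keras training script of the kind such examples follow, using MNIST as a stand-in dataset; it is an illustrative sketch, not an example reproduced from the book.

```python
# Illustrative stand-in, not an example from the book: train a small
# fully connected classifier on MNIST with the tf.keras API.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),                       # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"), # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5,
          validation_data=(x_test, y_test))
```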

  • Explore and master core concepts: perceptrons, gradient-based learning, sigmoid neurons, and backpropagation (see the perceptron sketch after this list)
  • See how DL frameworks make it easier to develop more complicated and useful neural networks
  • Discover how convolutional neural networks (CNNs) revolutionize image classification and analysis
  • Apply recurrent neural networks (RNNs) and long short-term memory (LSTM) to text and other variable-length sequences
  • Master NLP with sequence-to-sequence networks and the Transformer architecture
  • Build applications for natural language translation and image captioning
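As a taste of the first bullet, here is a hypothetical sketch (not the book's code) of the Rosenblatt perceptron learning rule: whenever the sign of the weighted sum disagrees with the label, nudge the weights toward the example.

```python
# Hypothetical perceptron sketch (not reproduced from the book).
import numpy as np

def train_perceptron(x, y, lr=0.1, epochs=10):
    """x: (n_samples, n_features); y: labels in {-1, +1}."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            pred = 1.0 if np.dot(w, xi) + b > 0 else -1.0
            if pred != yi:        # misclassified: adjust toward the example
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Learn the logical AND function, which is linearly separable.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1], dtype=float)
w, b = train_perceptron(x, y)
print(w, b)  # weights and bias of a separating line for AND
```

Gradient-based learning (Chapter 2) and backpropagation (Chapter 3) generalize this kind of weight update to differentiable, multi-layer networks.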

Table of contents

  • Chapter 1: The Rosenblatt Perceptron
  • Chapter 2: Gradient-Based Learning
  • Chapter 3: Sigmoid Neurons and Backpropagation
  • Chapter 4: Fully Connected Networks Applied to Multiclass Classification
  • Chapter 5: Toward DL: Frameworks and Network Tweaks
  • Chapter 6: Fully Connected Networks Applied to Regression
  • Chapter 7: Convolutional Neural Networks Applied to Image Classification
  • Chapter 8: Deeper CNNs and Pretrained Models
  • Chapter 9: Predicting Time Sequences with Recurrent Neural Networks
  • Chapter 10: Long Short-Term Memory
  • Chapter 11: Text Autocompletion with LSTM and Beam Search
  • Chapter 12: Neural Language Models and Word Embeddings
  • Chapter 13: Word Embeddings from word2vec and GloVe
  • Chapter 14: Sequence-to-Sequence Networks and Natural Language Translation
  • Chapter 15: Attention and the Transformer
  • Chapter 16: One-to-Many Network for Image Captioning
  • Chapter 17: Medley of Additional Topics
  • Chapter 18: Summary and Next Steps
