Digital Signal Processing: Principles, Algorithms and Applications, 5th edition

  • John G. Proakis
  • Dimitris G. Manolakis

Your access includes:

  • Search, highlight, notes, and more
  • Easily create flashcards
  • Use the app for access anywhere
  • 14-day refund guarantee

$10.99 per month

Minimum 4-month term, pay monthly or pay $43.96 upfront

Learn more, spend less

  • Watch and learn: Videos & animations bring concepts to life
  • Listen on the go: Learn how you like with full eTextbook audio
  • Find it fast: Quickly navigate your eTextbook with search
  • Stay organized: Access all your eTextbooks in one place
  • Easily continue access: Keep learning with auto-renew

Overview

Digital Signal Processing offers balanced coverage of digital signal processing theory and practical applications. It's your guide to the fundamental concepts and techniques of discrete-time signals, systems, and modern digital processing. Related algorithms and applications are covered, as are both time-domain and frequency-domain methods for the analysis of linear, discrete-time systems. Numerous examples and over 500 problems emphasize software implementation of digital signal processing algorithms.

The 5th Edition includes a new chapter on multirate digital filter banks and wavelets. Several new topics have been added to existing chapters, including the short-time Fourier transform, the sparse FFT algorithm, and reverberation filters.

Published by Pearson (July 23, 2021) - Copyright © 2022

ISBN-13: 9780137348657

Subject: Electrical Engineering

Category: Digital Signals & Systems

Table of Contents

  1. Introduction
    • 1.1 Signals, Systems, and Signal Processing
      • 1.1.1 Basic Elements of a Digital Signal Processing System
      • 1.1.2 Advantages of Digital over Analog Signal Processing
    • 1.2 Classification of Signals
      • 1.2.1 Multichannel and Multidimensional Signals
      • 1.2.2 Continuous-Time Versus Discrete-Time Signals
      • 1.2.3 Continuous-Valued Versus Discrete-Valued Signals
      • 1.2.4 Deterministic Versus Random Signals
    • 1.3 Summary
    • Problems
  2. Discrete-Time Signals and Systems
    • 2.1 Discrete-Time Signals
      • 2.1.1 Some Elementary Discrete-Time Signals
      • 2.1.2 Classification of Discrete-Time Signals
      • 2.1.3 Simple Manipulations of Discrete-Time Signals
    • 2.2 Discrete-Time Systems
      • 2.2.1 Input-Output Description of Systems
      • 2.2.2 Block Diagram Representation of Discrete-Time Systems
      • 2.2.3 Classification of Discrete-Time Systems
      • 2.2.4 Interconnection of Discrete-Time Systems
    • 2.3 Analysis of Discrete-Time Linear Time-Invariant Systems
      • 2.3.1 Techniques for the Analysis of Linear Systems
      • 2.3.2 Resolution of a Discrete-Time Signal into Impulses
      • 2.3.3 Response of LTI Systems to Arbitrary Inputs: The Convolution Sum
      • 2.3.4 Properties of Convolution and the Interconnection of LTI Systems
      • 2.3.5 Causal Linear Time-Invariant Systems
      • 2.3.6 Stability of Linear Time-Invariant Systems
      • 2.3.7 Systems with Finite-Duration and Infinite-Duration Impulse Response
    • 2.4 Discrete-Time Systems Described by Difference Equations
      • 2.4.1 Recursive and Nonrecursive Discrete-Time Systems
      • 2.4.2 Linear Time-Invariant Systems Characterized by Constant-Coefficient Difference Equations
      • 2.4.3 Application of LTI Systems for Signal Smoothing
    • 2.5 Implementation of Discrete-Time Systems
      • 2.5.1 Structures for the Realization of Linear Time-Invariant Systems
      • 2.5.2 Recursive and Nonrecursive Realizations of FIR Systems
    • 2.6 Correlation of Discrete-Time Signals
      • 2.6.1 Crosscorrelation and Autocorrelation Sequences
      • 2.6.2 Properties of the Autocorrelation and Crosscorrelation Sequences
      • 2.6.3 Correlation of Periodic Sequences
      • 2.6.4 Input-Output Correlation Sequences
    • 2.7 Summary
    • Problems
    • Computer Problems
  3. The z-Transform and Its Application to the Analysis of LTI Systems
    • 3.1 The z-Transform
      • 3.1.1 The Direct z-Transform
      • 3.1.2 The Inverse z-Transform
    • 3.2 Properties of the z-Transform
    • 3.3 Rational z-Transforms
      • 3.3.1 Poles and Zeros
      • 3.3.2 Pole Location and Time-Domain Behavior for Causal Signals
      • 3.3.3 The System Function of a Linear Time-Invariant System
    • 3.4 Inversion of the z-Transform
      • 3.4.1 The Inverse z-Transform by Contour Integration
      • 3.4.2 The Inverse z-Transform by Power Series Expansion
      • 3.4.3 The Inverse z-Transform by Partial-Fraction Expansion
      • 3.4.4 Decomposition of Rational z-Transforms
    • 3.5 Analysis of Linear Time-Invariant Systems in the z-Domain
      • 3.5.1 Response of Systems with Rational System Functions
      • 3.5.2 Transient and Steady-State Responses
      • 3.5.3 Causality and Stability
      • 3.5.4 Pole-Zero Cancellations
      • 3.5.5 Multiple-Order Poles and Stability
      • 3.5.6 Stability of Second-Order Systems
    • 3.6 The One-sided z-Transform
      • 3.6.1 Definition and Properties
      • 3.6.2 Solution of Difference Equations
      • 3.6.3 Response of Pole-Zero Systems with Nonzero Initial Conditions
    • 3.7 Summary
    • Problems
    • Computer Problems
  4. Frequency Analysis of Signals
    • 4.1 The Concept of Frequency in Continuous-Time and Discrete-Time Signals
      • 4.1.1 Continuous-Time Sinusoidal Signals
      • 4.1.2 Discrete-Time Sinusoidal Signals
      • 4.1.3 Harmonically Related Complex Exponentials
      • 4.1.4 Sampling of Analog Signals
      • 4.1.5 The Sampling Theorem
    • 4.2 Frequency Analysis of Continuous-Time Signals
      • 4.2.1 The Fourier Series for Continuous-Time Periodic Signals
      • 4.2.2 Power Density Spectrum of Periodic Signals
      • 4.2.3 The Fourier Transform for Continuous-Time Aperiodic Signals
      • 4.2.4 Energy Density Spectrum of Aperiodic Signals
    • 4.3 Frequency Analysis of Discrete-Time Signals
      • 4.3.1 The Fourier Series for Discrete-Time Periodic Signals
      • 4.3.2 Power Density Spectrum of Periodic Signals
      • 4.3.3 The Fourier Transform of Discrete-Time Aperiodic Signals
      • 4.3.4 Convergence of the Fourier Transform
      • 4.3.5 Energy Density Spectrum of Aperiodic Signals
      • 4.3.6 Relationship of the Fourier Transform to the z-Transform
      • 4.3.7 The Cepstrum
      • 4.3.8 The Fourier Transform of Signals with Poles on the Unit Circle
      • 4.3.9 Frequency-Domain Classification of Signals: The Concept of Bandwidth
      • 4.3.10 The Frequency Ranges of Some Natural Signals
    • 4.4 Frequency-Domain and Time-Domain Signal Properties
    • 4.5 Properties of the Fourier Transform for Discrete-Time Signals
      • 4.5.1 Symmetry Properties of the Fourier Transform
      • 4.5.2 Fourier Transform Theorems and Properties
    • 4.6 Summary
    • Problems
    • Computer Problems
  5. Frequency-Domain Analysis of LTI Systems
    • 5.1 Frequency-Domain Characteristics of Linear Time-Invariant Systems
      • 5.1.1 Response to Complex Exponential and Sinusoidal Signals: The Frequency Response Function
      • 5.1.2 Steady-State and Transient Response to Sinusoidal Input Signals
      • 5.1.3 Steady-State Response to Periodic Input Signals
      • 5.1.4 Steady-State Response to Aperiodic Input Signals
    • 5.2 Frequency Response of LTI Systems
      • 5.2.1 Frequency Response of a System with a Rational System Function
      • 5.2.2 Computation of the Frequency Response Function
    • 5.3 Correlation Functions and Spectra at the Output of LTI Systems
    • 5.4 Linear Time-Invariant Systems as Frequency-Selective Filters
      • 5.4.1 Ideal Filter Characteristics
      • 5.4.2 Lowpass, Highpass, and Bandpass Filters
      • 5.4.3 Digital Resonators
      • 5.4.4 Notch Filters
      • 5.4.5 Comb Filters
      • 5.4.6 Reverberation Filters
      • 5.4.7 All-Pass Filters
      • 5.4.8 Digital Sinusoidal Oscillators
    • 5.5 Inverse Systems and Deconvolution
      • 5.5.1 Invertibility of Linear Time-Invariant Systems
      • 5.5.2 Minimum-Phase, Maximum-Phase, and Mixed-Phase Systems
      • 5.5.3 System Identification and Deconvolution
      • 5.5.4 Homomorphic Deconvolution
    • 5.6 Summary
    • Problems
    • Computer Problems
  6. Sampling and Reconstruction of Signals
    • 6.1 Ideal Sampling and Reconstruction of Continuous-Time Signals
    • 6.2 Discrete-Time Processing of Continuous-Time Signals
    • 6.3 Sampling and Reconstruction of Continuous-Time Bandpass Signals
      • 6.3.1 Uniform or First-Order Sampling
      • 6.3.2 Interleaved or Nonuniform Second-Order Sampling
      • 6.3.3 Bandpass Signal Representations
      • 6.3.4 Sampling Using Bandpass Signal Representations
    • 6.4 Sampling of Discrete-Time Signals
      • 6.4.1 Sampling and Interpolation of Discrete-Time Signals
      • 6.4.2 Representation and Sampling of Bandpass Discrete-Time Signals
    • 6.5 Analog-to-Digital and Digital-to-Analog Converters
      • 6.5.1 Analog-to-Digital Converters
      • 6.5.2 Quantization and Coding
      • 6.5.3 Analysis of Quantization Errors
      • 6.5.4 Digital-to-Analog Converters
    • 6.6 Oversampling A/D and D/A Converters
      • 6.6.1 Oversampling A/D Converters
      • 6.6.2 Oversampling D/A Converters
    • 6.7 Summary
    • Problems
    • Computer Problems
  7. The Discrete Fourier Transform: Its Properties and Applications
    • 7.1 Frequency-Domain Sampling: The Discrete Fourier Transform
      • 7.1.1 Frequency-Domain Sampling and Reconstruction of Discrete-Time Signals
      • 7.1.2 The Discrete Fourier Transform (DFT)
      • 7.1.3 The DFT as a Linear Transformation
      • 7.1.4 Relationship of the DFT to Other Transforms
    • 7.2 Properties of the DFT
      • 7.2.1 Periodicity, Linearity, and Symmetry Properties
      • 7.2.2 Multiplication of Two DFTs and Circular Convolution
      • 7.2.3 Additional DFT Properties
    • 7.3 Linear Filtering Methods Based on the DFT
      • 7.3.1 Use of the DFT in Linear Filtering
      • 7.3.2 Filtering of Long Data Sequences
    • 7.4 Frequency Analysis of Signals Using the DFT
    • 7.5 The Short-Time Fourier Transform
    • 7.6 The Discrete Cosine Transform
      • 7.6.1 Forward DCT
      • 7.6.2 Inverse DCT
      • 7.6.3 DCT as an Orthogonal Transform
    • 7.7 Summary
    • Problems
    • Computer Problems
  8. Efficient Computation of the DFT: Fast Fourier Transform Algorithms
    • 8.1 Efficient Computation of the DFT: FFT Algorithms
      • 8.1.1 Direct Computation of the DFT
      • 8.1.2 Divide-and-Conquer Approach to Computation of the DFT
      • 8.1.3 Radix-2 FFT Algorithms
      • 8.1.4 Radix-4 FFT Algorithms
      • 8.1.5 Split-Radix FFT Algorithms
      • 8.1.6 Implementation of FFT Algorithms
      • 8.1.7 Sparse FFT Algorithm
    • 8.2 Applications of FFT Algorithms
      • 8.2.1 Efficient Computation of the DFT of Two Real Sequences
      • 8.2.2 Efficient Computation of the DFT of a 2N-Point Real Sequence
      • 8.2.3 Use of the FFT Algorithm in Linear Filtering and Correlation
    • 8.3 A Linear Filtering Approach to Computation of the DFT
      • 8.3.1 The Goertzel Algorithm
      • 8.3.2 The Chirp-z Transform Algorithm
    • 8.4 Quantization Effects in the Computation of the DFT
      • 8.4.1 Quantization Errors in the Direct Computation of the DFT
      • 8.4.2 Quantization Errors in FFT Algorithms
    • 8.5 Summary
    • Problems
    • Computer Problems
  9. Implementation of Discrete-Time Systems
    • 9.1 Structures for the Realization of Discrete-Time Systems
    • 9.2 Structures for FIR Systems
      • 9.2.1 Direct-Form Structure
      • 9.2.2 Cascade-Form Structures
      • 9.2.3 Frequency-Sampling Structures
      • 9.2.4 Lattice Structure
    • 9.3 Structures for IIR Systems
      • 9.3.1 Direct-Form Structures
      • 9.3.2 Signal Flow Graphs and Transposed Structures
      • 9.3.3 Cascade-Form Structures
      • 9.3.4 Parallel-Form Structures
      • 9.3.5 Lattice and Lattice-Ladder Structures for IIR Systems
    • 9.4 Representation of Numbers
      • 9.4.1 Fixed-Point Representation of Numbers
      • 9.4.2 Binary Floating-Point Representation of Numbers
      • 9.4.3 Errors Resulting from Rounding and Truncation
    • 9.5 Quantization of Filter Coefficients
      • 9.5.1 Analysis of Sensitivity to Quantization of Filter Coefficients
      • 9.5.2 Quantization of Coefficients in FIR Filters
    • 9.6 Round-Off Effects in Digital Filters
      • 9.6.1 Limit-Cycle Oscillations in Recursive Systems
      • 9.6.2 Scaling to Prevent Overflow
      • 9.6.3 Statistical Characterization of Quantization Effects in Fixed-Point Realizations of Digital Filters
    • 9.7 Summary
    • Problems
    • Computer Problems
  10. Design of Digital Filters
    • 10.1 General Considerations
      • 10.1.1 Causality and Its Implications
      • 10.1.2 Characteristics of Practical Frequency-Selective Filters
    • 10.2 Design of FIR Filters
      • 10.2.1 Symmetric and Antisymmetric FIR Filters
      • 10.2.2 Design of Linear-Phase FIR Filters Using Windows
      • 10.2.3 Design of Linear-Phase FIR Filters by the Frequency-Sampling Method
      • 10.2.4 Design of Optimum Equiripple Linear-Phase FIR Filters
      • 10.2.5 Design of FIR Differentiators
      • 10.2.6 Design of Hilbert Transformers
      • 10.2.7 Comparison of Design Methods for Linear-Phase FIR Filters
    • 10.3 Design of IIR Filters From Analog Filters
      • 10.3.1 IIR Filter Design by Approximation of Derivatives
      • 10.3.2 IIR Filter Design by Impulse Invariance
      • 10.3.3 IIR Filter Design by the Bilinear Transformation
      • 10.3.4 Characteristics of Commonly Used Analog Filters
      • 10.3.5 Some Examples of Digital Filter Designs Based on the Bilinear Transformation
    • 10.4 Frequency Transformations
      • 10.4.1 Frequency Transformations in the Analog Domain
      • 10.4.2 Frequency Transformations in the Digital Domain
    • 10.5 Summary
    • Problems
    • Computer Problems
  11. Multirate Digital Signal Processing
    • 11.1 Introduction
    • 11.2 Decimation by a Factor D
    • 11.3 Interpolation by a Factor I
    • 11.4 Sampling Rate Conversion by a Rational Factor I/D
    • 11.5 Implementation of Sampling Rate Conversion
      • 11.5.1 Polyphase Filter Structures
      • 11.5.2 Interchange of Filters and Downsamplers/Upsamplers
      • 11.5.3 Sampling Rate Conversion with Cascaded Integrator Comb Filters
      • 11.5.4 Polyphase Structures for Decimation and Interpolation Filters
      • 11.5.5 Structures for Rational Sampling Rate Conversion
    • 11.6 Multistage Implementation of Sampling Rate Conversion
    • 11.7 Sampling Rate Conversion of Bandpass Signals
    • 11.8 Sampling Rate Conversion by an Arbitrary Factor
      • 11.8.1 Arbitrary Resampling with Polyphase Interpolators
      • 11.8.2 Arbitrary Resampling with Farrow Filter Structures
    • 11.9 Applications of Multirate Signal Processing
      • 11.9.1 Design of Phase Shifters
      • 11.9.2 Interfacing of Digital Systems with Different Sampling Rates
      • 11.9.3 Implementation of Narrowband Lowpass Filters
      • 11.9.4 Subband Coding of Speech Signals
    • 11.10 Summary
    • Problems
    • Computer Problems
  12. Multirate Digital Filter Banks and Wavelets
    • 12.1 Multirate Digital Filter Banks
      • 12.1.1 DFT Filter Banks
      • 12.1.2 Polyphase Structure of the Uniform DFT Filter Bank
      • 12.1.3 An Alternative Structure of the Uniform DFT Filter Bank
    • 12.2 Two-Channel Quadrature Mirror Filter Bank
      • 12.2.1 Elimination of Aliasing
      • 12.2.2 Polyphase Structure of the QMF Bank
      • 12.2.3 Condition for Perfect Reconstruction
      • 12.2.4 Linear Phase FIR QMF Bank
      • 12.2.5 IIR QMF Bank
      • 12.2.6 Perfect Reconstruction in Two-Channel FIR QMF Bank
      • 12.2.7 Two-Channel Paraunitary QMF Bank
      • 12.2.8 Orthogonal and Biorthogonal Two-Channel FIR Filter Banks
      • 12.2.9 Two-Channel QMF Banks in Subband Coding
    • 12.3 M-Channel Filter Banks
      • 12.3.1 Polyphase Structure for the M-Channel Filter Bank
      • 12.3.2 M-Channel Paraunitary Filter Banks
    • 12.4 Wavelets and Wavelet Transforms
      • 12.4.1 Ideal Bandpass Wavelet Decomposition
      • 12.4.2 Signal Spaces and Wavelets
      • 12.4.3 Multiresolution Analysis and Wavelets
      • 12.4.4 The Discrete Wavelet Transform
    • 12.5 From Wavelets to Filter Banks
      • 12.5.1 Dilation Equations
      • 12.5.2 Orthogonality Conditions
      • 12.5.3 Implications of Orthogonality and Dilation Equations
    • 12.6 From Filter Banks to Wavelets
    • 12.7 Regular Filters and Wavelets
    • 12.8 Summary
    • Problems
    • Computer Problems
  13. Linear Prediction and Optimum Linear Filters
    • 13.1 Random Signals, Correlation Functions, and Power Spectra
      • 13.1.1 Random Processes
      • 13.1.2 Stationary Random Processes
      • 13.1.3 Statistical (Ensemble) Averages
      • 13.1.4 Statistical Averages for Joint Random Processes
      • 13.1.5 Power Density Spectrum
      • 13.1.6 Discrete-Time Random Signals
      • 13.1.7 Time Averages for a Discrete-Time Random Process
      • 13.1.8 Mean-Ergodic Process
      • 13.1.9 Correlation-Ergodic Processes
      • 13.1.10 Correlation Functions and Power Spectra for Random Input Signals to LTI Systems
    • 13.2 Innovations Representation of a Stationary Random Process
      • 13.2.1 Rational Power Spectra
      • 13.2.2 Relationships Between the Filter Parameters and the Autocorrelation Sequence
    • 13.3 Forward and Backward Linear Prediction
      • 13.3.1 Forward Linear Prediction
      • 13.3.2 Backward Linear Prediction
      • 13.3.3 The Optimum Reflection Coefficients for the Lattice Forward and Backward Predictors
      • 13.3.4 Relationship of an AR Process to Linear Prediction
    • 13.4 Solution of the Normal Equations
      • 13.4.1 The Levinson-Durbin Algorithm
    • 13.5 Properties of the Linear Prediction-Error Filters
    • 13.6 AR Lattice and ARMA Lattice-Ladder Filters
      • 13.6.1 AR Lattice Structure
      • 13.6.2 ARMA Processes and Lattice-Ladder Filters
    • 13.7 Wiener Filters for Filtering and Prediction
      • 13.7.1 FIR Wiener Filter
      • 13.7.2 Orthogonality Principle in Linear Mean-Square Estimation
      • 13.7.3 IIR Wiener Filter
      • 13.7.4 Noncausal Wiener Filter
    • 13.8 Summary
    • Problems
    • Computer Problems
  14. Adaptive Filters
    • 14.1 Applications of Adaptive Filters
      • 14.1.1 System Identification or System Modeling
      • 14.1.2 Adaptive Channel Equalization
      • 14.1.3 Suppression of Narrowband Interference in a Wideband Signal
      • 14.1.4 Adaptive Line Enhancer
      • 14.1.5 Adaptive Noise Cancelling
      • 14.1.6 Adaptive Arrays
    • 14.2 Adaptive Direct-Form FIR Filters - The LMS Algorithm
      • 14.2.1 Minimum Mean-Square-Error Criterion
      • 14.2.2 The LMS Algorithm
      • 14.2.3 Related Stochastic Gradient Algorithms
      • 14.2.4 Properties of the LMS Algorithm
    • 14.3 Adaptive Direct-Form Filters - RLS Algorithms
      • 14.3.1 RLS Algorithm
      • 14.3.2 The LDU Factorization and Square-Root Algorithms
      • 14.3.3 Fast RLS Algorithms
      • 14.3.4 Properties of the Direct-Form RLS Algorithms
    • 14.4 Adaptive Lattice-Ladder Filters
      • 14.4.1 Recursive Least-Squares Lattice-Ladder Algorithms
      • 14.4.2 Other Lattice Algorithms
      • 14.4.3 Properties of Lattice-Ladder Algorithms
    • 14.5 Stability and Robustness of Adaptive Filter Algorithms
    • 14.6 Summary
    • Problems
    • Computer Problems
  15. Power Spectrum Estimation
    • 15.1 Estimation of Spectra from Finite-Duration Observations of Signals
      • 15.1.1 Computation of the Energy Density Spectrum
      • 15.1.2 Estimation of the Autocorrelation and Power Spectrum of Random Signals: The Periodogram
      • 15.1.3 The Use of the DFT in Power Spectrum Estimation
    • 15.2 Nonparametric Methods for Power Spectrum Estimation
      • 15.2.1 The Bartlett Method: Averaging Periodograms
      • 15.2.2 The Welch Method: Averaging Modified Periodograms
      • 15.2.3 The Blackman and Tukey Method: Smoothing the Periodogram
      • 15.2.4 Performance Characteristics of Nonparametric Power Spectrum Estimators
      • 15.2.5 Computational Requirements of Nonparametric Power Spectrum Estimates
    • 15.3 Parametric Methods for Power Spectrum Estimation
      • 15.3.1 Relationships Between the Autocorrelation and the Model Parameters
      • 15.3.2 The Yule-Walker Method for the AR Model Parameters
      • 15.3.3 The Burg Method for the AR Model Parameters
      • 15.3.4 Unconstrained Least-Squares Method for the AR Model Parameters
      • 15.3.5 Sequential Estimation Methods for the AR Model Parameters
      • 15.3.6 Selection of AR Model Order
      • 15.3.7 MA Model for Power Spectrum Estimation
      • 15.3.8 ARMA Model for Power Spectrum Estimation
      • 15.3.9 Some Experimental Results
    • 15.4 ARMA Model Parameter Estimation
    • 15.5 Filter Bank Methods
      • 15.5.1 Filter Bank Realization of the Periodogram
      • 15.5.2 Minimum Variance Spectral Estimates
    • 15.6 Eigenanalysis Algorithms for Spectrum Estimation
      • 15.6.1 Pisarenko Harmonic Decomposition Method
      • 15.6.2 Eigen-decomposition of the Autocorrelation Matrix for Sinusoids in White Noise
      • 15.6.3 MUSIC Algorithm
      • 15.6.4 ESPRIT Algorithm
      • 15.6.5 Order Selection Criteria
      • 15.6.6 Experimental Results
    • 15.7 Summary
    • Problems
    • Computer Problems

Appendix A: Random Number Generators

Appendix B: Tables of Transition Coefficients for the Design of Linear-Phase FIR Filters

References and Bibliography

Answers to Selected Problems

Index

Your questions answered

Pearson+ is your one-stop shop, with eTextbooks and study videos designed to help students get better grades in college.

A Pearson eTextbook is an easy‑to‑use digital version of the book. You'll get upgraded study tools, including enhanced search, highlights and notes, flashcards and audio. Plus, you can learn on the go with the Pearson+ app.

Your eTextbook subscription gives you access for 4 months. You can make a one‑time payment for the initial 4‑month term or pay monthly. If you opt for monthly payments, we will charge your payment method each month until your 4‑month term ends. You can turn on auto‑renew in My account at any time to continue your subscription before your 4‑month term ends.

When you purchase an eTextbook subscription, it will last 4 months. You can renew your subscription by selecting Extend subscription on the Manage subscription page in My account before your initial term ends.

If you extend your subscription, we'll automatically charge you every month. If you made a one‑time payment for your initial 4‑month term, you'll now pay monthly. To make sure your learning is uninterrupted, please check your card details.

To avoid the next payment charge, select Cancel subscription on the Manage subscription page in My account before the renewal date. You can subscribe again in the future by purchasing another eTextbook subscription.

Channels is a video platform with thousands of explanations, solutions and practice problems to help you do homework and prep for exams. Videos are personalized to your course, and tutors walk you through solutions. Plus, interactive AI‑powered summaries and a social community help you better understand lessons from class.

Channels is an additional tool to help you with your studies. This means you can use Channels even if your course uses a non‑Pearson textbook.

When you choose a Channels subscription, you're signing up for a 1‑month, 3‑month or 12‑month term and you make an upfront payment for your subscription. By default, these subscriptions auto‑renew at the frequency you select during checkout.

When you purchase a Channels subscription it will last 1 month, 3 months or 12 months, depending on the plan you chose. Your subscription will automatically renew at the end of your term unless you cancel it.

We use your credit card to renew your subscription automatically. To make sure your learning is uninterrupted, please check your card details.