Neural Networks and Learning Machines

Publication date: 2009-03  Publisher: China Machine Press (机械工业出版社)  Author: Simon Haykin (Canada)  Pages: 906

Preface

In writing this third edition of a classic book, I have been guided by the same underlying philosophy of the first edition of the book: Write an up-to-date treatment of neural networks in a comprehensive, thorough, and readable manner. The new edition has been retitled Neural Networks and Learning Machines, in order to reflect two realities:

1. The perceptron, the multilayer perceptron, self-organizing maps, and neurodynamics, to name a few topics, have always been considered integral parts of neural networks, rooted in ideas inspired by the human brain.
2. Kernel methods, exemplified by support vector machines and kernel principal-components analysis, are rooted in statistical learning theory.

Although, indeed, they share many fundamental concepts and applications, there are some subtle differences between the operations of neural networks and learning machines. The underlying subject matter is therefore much richer when they are studied together, under one umbrella, particularly so when ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either one operating on its own, and ideas inspired by the human brain lead to new perspectives wherever they are of particular importance.

Overview

Neural networks are an important branch of computational intelligence and machine learning and have achieved great success in many fields. Among the many books on neural networks, the most influential is Simon Haykin's Neural Networks: A Comprehensive Foundation (retitled Neural Networks and Learning Machines in its third edition). Drawing on recent advances in neural networks and machine learning, the author introduces the basic models, methods, and techniques of neural networks comprehensively and systematically, from both theoretical and practical standpoints, and integrates neural networks and machine learning into an organic whole. The book emphasizes mathematical analysis and theory while paying equal attention to applications of neural networks in practical engineering problems such as pattern recognition, signal processing, and control systems. It is highly readable: the author treats the basic models and principal learning theories of neural networks in depth yet with a light touch, and supports the reader with numerous computer experiments, worked examples, and problems. This edition has been extensively revised and provides an up-to-date treatment of neural networks and machine learning, two increasingly important disciplines.

Features of the book:
  •   On-line learning algorithms based on stochastic gradient descent; small-scale and large-scale learning problems.
  •   Kernel methods, including support vector machines and the representer theorem.
  •   Information-theoretic learning models, including copulas, independent-components analysis (ICA), coherent ICA, and the information bottleneck.
  •   Stochastic dynamic programming, including approximate and neuro-dynamic programming.
  •   Sequential state-estimation algorithms, including Kalman and particle filters.
  •   Training recurrent neural networks with sequential state-estimation algorithms.
  •   Insightful computer-oriented experiments.

About the Author

Simon Haykin received his Ph.D. from the University of Birmingham, UK, in 1953. He is currently a professor in the Department of Electrical and Computer Engineering at McMaster University, Canada, and Director of its Communications Research Laboratory. A renowned scholar in the international electrical and electronics engineering community, he has been awarded the IEEE McNaughton Gold Medal. He is a Fellow of the Royal Society of Canada and a Fellow of the IEEE, with prolific contributions in neural networks, communications, adaptive filtering, and related fields.

Table of Contents

Preface
Acknowledgements
Abbreviations and Symbols
Glossary

Introduction
  1 What Is a Neural Network?
  2 The Human Brain
  3 Models of a Neuron
  4 Neural Networks Viewed As Directed Graphs
  5 Feedback
  6 Network Architectures
  7 Knowledge Representation
  8 Learning Processes
  9 Learning Tasks
  10 Concluding Remarks
  Notes and References
Chapter 1 Rosenblatt's Perceptron
  1.1 Introduction
  1.2 Perceptron
  1.3 The Perceptron Convergence Theorem
  1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
  1.5 Computer Experiment: Pattern Classification
  1.6 The Batch Perceptron Algorithm
  1.7 Summary and Discussion
  Notes and References
  Problems
Chapter 2 Model Building through Regression
  2.1 Introduction
  2.2 Linear Regression Model: Preliminary Considerations
  2.3 Maximum a Posteriori Estimation of the Parameter Vector
  2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation
  2.5 Computer Experiment: Pattern Classification
  2.6 The Minimum-Description-Length Principle
  2.7 Finite Sample-Size Considerations
  2.8 The Instrumental-Variables Method
  2.9 Summary and Discussion
  Notes and References
  Problems
Chapter 3 The Least-Mean-Square Algorithm
  3.1 Introduction
  3.2 Filtering Structure of the LMS Algorithm
  3.3 Unconstrained Optimization: A Review
  3.4 The Wiener Filter
  3.5 The Least-Mean-Square Algorithm
  3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter
  3.7 The Langevin Equation: Characterization of Brownian Motion
  3.8 Kushner's Direct-Averaging Method
  3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter
  3.10 Computer Experiment I: Linear Prediction
  3.11 Computer Experiment II: Pattern Classification
  3.12 Virtues and Limitations of the LMS Algorithm
  3.13 Learning-Rate Annealing Schedules
  3.14 Summary and Discussion
  Notes and References
  Problems
Chapter 4 Multilayer Perceptrons
  4.1 Introduction
  4.2 Some Preliminaries
  4.3 Batch Learning and On-Line Learning
  4.4 The Back-Propagation Algorithm
  4.5 XOR Problem
  4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better
  4.7 Computer Experiment: Pattern Classification
  4.8 Back Propagation and Differentiation
  4.9 The Hessian and Its Role in On-Line Learning
  4.10 Optimal Annealing and Adaptive Control of the Learning Rate
  4.11 Generalization
  4.12 Approximations of Functions
  4.13 Cross-Validation
  4.14 Complexity Regularization and Network Pruning
  4.15 Virtues and Limitations of Back-Propagation Learning
  4.16 Supervised Learning Viewed as an Optimization Problem
  4.17 Convolutional Networks
  4.18 Nonlinear Filtering
  4.19 Small-Scale Versus Large-Scale Learning Problems
  4.20 Summary and Discussion
  Notes and References
  Problems
Chapter 5 Kernel Methods and Radial-Basis Function Networks
  5.1 Introduction
  5.2 Cover's Theorem on the Separability of Patterns
  5.3 The Interpolation Problem
  5.4 Radial-Basis-Function Networks
  5.5 K-Means Clustering
  5.6 Recursive Least-Squares Estimation of the Weight Vector
  5.7 Hybrid Learning Procedure for RBF Networks
  5.8 Computer Experiment: Pattern Classification
  5.9 Interpretations of the Gaussian Hidden Units
  5.10 Kernel Regression and Its Relation to RBF Networks
  5.11 Summary and Discussion
  Notes and References
  Problems
Chapter 6 Support Vector Machines
Chapter 7 Regularization Theory
Chapter 8 Principal-Components Analysis
Chapter 9 Self-Organizing Maps
Chapter 10 Information-Theoretic Learning Models
Chapter 11 Stochastic Methods Rooted in Statistical Mechanics
Chapter 12 Dynamic Programming
Chapter 13 Neurodynamics
Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems
Chapter 15 Dynamically Driven Recurrent Networks
Bibliography
Index

Chapter Excerpt

…knowledge, the teacher is able to provide the neural network with a desired response for that training vector. Indeed, the desired response represents the "optimum" action to be performed by the neural network. The network parameters are adjusted under the combined influence of the training vector and the error signal. The error signal is defined as the difference between the desired response and the actual response of the network. This adjustment is carried out iteratively in a step-by-step fashion with the aim of eventually making the neural network emulate the teacher; the emulation is presumed to be optimum in some statistical sense. In this way, knowledge of the environment available to the teacher is transferred to the neural network through training and stored in the form of "fixed" synaptic weights, representing long-term memory. When this condition is reached, we may then dispense with the teacher and let the neural network deal with the environment completely by itself.

The form of supervised learning we have just described is the basis of error-correction learning. From Fig. 24, we see that the supervised-learning process constitutes a closed-loop feedback system, but the unknown environment is outside the loop. As a performance measure for the system, we may think in terms of the mean-square error, or the sum of squared errors over the training sample, defined as a function of the free parameters (i.e., synaptic weights) of the system. This function may be visualized as a multidimensional error-performance surface, or simply error surface, with the free parameters as coordinates. The true error surface is averaged over all possible input-output examples. Any given operation of the system under the teacher's supervision is represented as a point on the error surface. For the system to improve performance over time and therefore learn from the teacher, the operating point has to move down successively toward a minimum point of the error surface; the minimum point may be a local minimum or a global minimum. A supervised-learning system is able to do this with the useful information it has about the gradient of the error surface corresponding to the current behavior of the system.
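The error-correction rule described in this excerpt is easy to make concrete in code. The following is a minimal sketch in Python/NumPy, not code from the book: it trains a single linear neuron with the LMS (delta) rule, the simplest instance of moving the operating point down the error surface by stochastic gradient descent. The learning rate eta, the synthetic data, and the names w_true, X, and d are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of error-correction (supervised) learning: a "teacher"
# supplies the desired response d for each training vector x, and the
# weights are nudged by the error signal e = desired - actual.

rng = np.random.default_rng(0)

# Synthetic environment plus the teacher's labels (illustrative assumption).
w_true = np.array([2.0, -1.0, 0.5])            # unknown to the learner
X = rng.normal(size=(500, 3))                  # training vectors
d = X @ w_true + 0.1 * rng.normal(size=500)    # desired responses

w = np.zeros(3)   # free parameters (synaptic weights), the coordinates of the error surface
eta = 0.05        # learning-rate parameter

for x, desired in zip(X, d):
    y = w @ x           # actual response of the linear neuron
    e = desired - y     # error signal: desired minus actual response
    w += eta * e * x    # LMS step down the error surface

print("learned weights:", w)   # approaches w_true as training proceeds
```

Each update is a stochastic-gradient step on the instantaneous squared error, so over many examples the operating point drifts toward a minimum of the error surface; for a linear neuron the surface is quadratic, so the minimum reached is also the global one.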




User Reviews (20 in total)


  •   I had been looking for the English edition of this book for a long time; the translation in the Chinese edition, 《神经网络原理》, is simply torture to read. This is the new edition, and it is very good. We bought four copies at once. PS: the pages are just too small, 15 cm x 21.4 cm.
  •   Very professional, but it demands a fair amount of mathematics; you also need some matrix theory, dynamical systems, functional analysis, and the like.
  •   I have read part of it. There are some printing errors and the type is too small, but overall it is acceptable.
  •   An important reference for researching and studying neural networks; it works even better alongside Mitchell's Machine Learning.
  •   The book itself is good; unfortunately my English is not great, so I have to brush it up, and I still turn to reference books for some of the technical vocabulary... please don't mock me.
  •   The type is too small to read clearly and the paper is too thin. I really don't understand how it could be printed this badly. The book itself is average; it is not exactly accessible, so skip it if your foundations are weak.
  •   The invoice they promised was never issued; that left a very bad impression!
  •   It really is a classic, but it is a small-format (32-mo) volume and the type is too small, which makes it tiring to read!
  •   The type is far too small, practically illegible. What a rip-off by the publisher!
  •   I have read about a sixth of it. It feels good, fairly suitable for beginners.
  •   Not bad, really not bad! It's what I want.
  •   Worth keeping a copy on hand as a reference; it comes in useful from time to time.
  •   Bought it for 34.5 yuan. It arrived still wrapped in protective clear film, plus Amazon's own packaging. The book is thick, 900-plus pages. The printing is acceptable, at least better than a pirated copy; the type is small, but it still beats reading an e-book many times over. I haven't really read the content yet; skimming the table of contents, it seems to demand a lot of mathematics. I'll reserve further judgment until I've read it for a while.
  •   I regret buying the English edition: the book is thick and I have too many courses, so I don't have the time to work through it in English!
  •   Haven't read it yet. It should be a master's work.
  •   It is a 32-mo book and the type is very small.
  •   Packaging intact, fast delivery.
  •   Knowledge is sometimes more important than reasoning.
  •   Neural Networks and Learning Machines (English edition, 3rd edition)
  •   I'll see once I've had a look.
 
