2. Introduction to Deep Learning
The concept of depth: the length of the longest path from an input to an output.
Three characteristics of deep architectures: insufficient depth causes problems; the human brain itself has a deep structure (each deeper layer performs one more level of abstraction, with features built out of the lower layer's features — this is the feature hierarchy discussed in the previous post, and the hierarchy is sparse); cognition proceeds layer by layer, abstracting step by step.
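The depth definition above can be made concrete by viewing an architecture as a directed acyclic graph of computational elements and measuring the longest input-to-output path. A minimal sketch (the graph, node names, and helper function are illustrative, not from the original text):

```python
from functools import lru_cache

def depth(edges, inputs, outputs):
    """Length (in edges) of the longest input-to-output path in a DAG.

    `edges` maps each node to its successors; `inputs`/`outputs` are node sets.
    (Illustrative helper — not part of any particular library.)
    """
    @lru_cache(maxsize=None)
    def longest_from(node):
        succs = edges.get(node, [])
        if not succs:
            # A sink contributes a valid path only if it is an output node.
            return 0 if node in outputs else float("-inf")
        best = max(longest_from(s) for s in succs)
        return best + 1 if best != float("-inf") else float("-inf")

    return max(longest_from(i) for i in inputs)

# A 2-hidden-layer MLP, x -> h1 -> h2 -> y, has depth 3.
edges = {"x": ["h1"], "h1": ["h2"], "h2": ["y"]}
print(depth(edges, inputs=frozenset({"x"}), outputs=frozenset({"y"})))  # 3
```

By this measure, adding a hidden layer increases depth by one, which is exactly what "each deeper layer performs one more abstraction" refers to.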
Three papers introduced Deep Belief Networks and marked the DBN breakthrough.
These three points are the essence of deep learning algorithms; I also covered them in my previous post, whose third part, Learning Feature Hierarchies & Sparse DBN, explained how to use a sparse DBN for feature learning.
4. Classic Deep Learning reading material:
• The monograph or review paper Learning Deep Architectures for AI (Foundations & Trends in Machine Learning, 2009).
• The ICML 2009 Workshop on Learning Feature Hierarchies webpage has a list of references.
• The LISA public wiki has a reading list and a bibliography.
• Geoff Hinton has readings from last year’s NIPS tutorial.
Three papers that lay out the main ideas of deep learning:
• Hinton, G. E., Osindero, S. and Teh, Y., A fast learning algorithm for deep belief nets, Neural Computation 18:1527-1554, 2006
• Yoshua Bengio, Pascal Lamblin, Dan Popovici and Hugo Larochelle, Greedy Layer-Wise Training of Deep Networks, in J. Platt et al. (Eds), Advances in Neural Information Processing Systems 19 (NIPS 2006), pp. 153-160, MIT Press, 2007 (compares RBMs with auto-encoders)
• Marc’Aurelio Ranzato, Christopher Poultney, Sumit Chopra and Yann LeCun, Efficient Learning of Sparse Representations with an Energy-Based Model, in J. Platt et al. (Eds), Advances in Neural Information Processing Systems (NIPS 2006), MIT Press, 2007 (applies sparse auto-encoders to a convolutional architecture)
After 2006, deep learning papers appeared in large numbers; interested readers can consult Yoshua Bengio's survey Learning Deep Architectures for AI, though be warned that it is very, very long.
5. A Deep Learning tool: Theano
Theano is a Python library for deep learning; it assumes familiarity with Python and NumPy. Readers are advised to start with the Theano basic tutorial, then follow Getting Started to download the datasets and train models with gradient descent.
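The Theano tutorials build a symbolic cost and differentiate it automatically with `theano.grad`; the underlying gradient-descent loop they generate looks like the following plain-NumPy sketch (the least-squares cost, data, and learning rate are illustrative assumptions, not the tutorial's own example):

```python
import numpy as np

# Gradient descent on a least-squares cost for y = w*x + b.
# Theano would derive these gradients symbolically; here they are
# written out by hand for a mean-squared-error cost.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0                 # target: w = 2, b = 1

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y         # prediction error per sample
    w -= lr * (err * x).mean()    # d/dw of 0.5 * mean(err^2)
    b -= lr * err.mean()          # d/db of 0.5 * mean(err^2)

print(round(w, 2), round(b, 2))   # converges to w ≈ 2, b ≈ 1
```

Once this loop is clear, the Theano versions of the exercises below differ mainly in expressing the cost symbolically and letting the library compute the update rules.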
After learning the basics of Theano, you can practice implementing the following algorithms:
Supervised learning:
1. Logistic Regression - using Theano for something simple
2. Multilayer perceptron - introduction to layers
3. Deep Convolutional Network - a simplified version of LeNet5
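As a warm-up for the first exercise, logistic regression is just a sigmoid (or softmax, in the multi-class case) over a linear map, trained by gradient descent on a cross-entropy cost. A minimal binary sketch on synthetic data (the tutorial's version is multi-class softmax on MNIST; everything here is an illustrative stand-in):

```python
import numpy as np

# Binary logistic regression trained by batch gradient descent.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
t = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

w = np.zeros(2)
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    w -= 0.5 * (X.T @ (p - t)) / len(t)     # gradient of mean cross-entropy
    b -= 0.5 * (p - t).mean()

acc = (((X @ w + b) > 0) == (t == 1)).mean()
print(acc)   # near-perfect accuracy on this separable training set
```

The multilayer perceptron and convolutional exercises keep this same training loop and only change how the prediction is computed from the input.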
Unsupervised learning:
• Auto Encoders, Denoising Autoencoders - description of autoencoders
• Stacked Denoising Auto-Encoders - easy steps into unsupervised pre-training for deep nets
• Restricted Boltzmann Machines - single layer generative RBM model
• Deep Belief Networks - unsupervised generative pre-training of stacked RBMs followed by supervised fine-tuning
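To make the denoising-autoencoder item concrete, its core loop is: corrupt the input, encode, decode, and minimize reconstruction error against the clean input. A minimal linear, tied-weight sketch on toy data (the tutorial's version uses sigmoid units and cross-entropy on MNIST; the sizes, noise level, and learning rate here are assumptions):

```python
import numpy as np

# Denoising autoencoder sketch: corrupt x, encode to h, decode, and
# minimize squared reconstruction error against the CLEAN input.
rng = np.random.default_rng(2)
M = rng.normal(size=(8, 8)) / np.sqrt(8)      # mixing matrix -> correlated data
X = rng.normal(size=(500, 8)) @ M

n_hidden, lr, noise = 4, 0.05, 0.1
W = 0.1 * rng.normal(size=(8, n_hidden))      # tied encoder/decoder weights

for _ in range(300):
    Xn = X + noise * rng.normal(size=X.shape)    # corruption step
    H = Xn @ W                                    # encode
    E = H @ W.T - X                               # decode, compare to clean X
    W -= lr * (Xn.T @ E @ W + E.T @ H) / len(X)  # gradient of 0.5*MSE wrt W

final_err = np.mean((X @ W @ W.T - X) ** 2)
print(final_err)   # well below the initial error of roughly np.mean(X**2)
```

Stacking several such layers and fine-tuning with labels gives exactly the stacked denoising autoencoders and DBN pre-training listed above.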
Finally, a few recommended machine learning books:
• Chris Bishop, “Pattern Recognition and Machine Learning”, 2007
• Simon Haykin, “Neural Networks and Learning Machines”, 2009 (3rd edition)
• Richard O. Duda, Peter E. Hart and David G. Stork, “Pattern Classification”, 2001 (2nd edition)