
Pattern Recognition (English reprint, 4th Edition)

Source: Internet   Published: 2011-09-19 14:21:54   Views: 12,738

導(dǎo)讀:模式識別(英文影印版.第4版)計算機(jī)_人工智能_模式識別 作者:(希臘)Sergios Theodoridis;Konstantinos Koutroumbas 本書是享譽世界的名著,內(nèi)容既全面又相對獨立...

內(nèi)容簡介

  This world-renowned text is comprehensive yet modular: it covers the fundamentals, surveys the current state of research in the field, and looks ahead to future developments. It is among the most complete references in pattern recognition and has been adopted as a textbook by many universities worldwide. It is suitable as a textbook for graduate students and senior undergraduates in computer science, electronics, communications, automation, and related programs, and as a reference for engineers working in information processing, automatic control, and related fields.
   Key features:
   ·Up-to-date coverage of clustering algorithms for large and high-dimensional data sets, with applications to Web mining and bioinformatics.
   ·A wide range of applications, including image analysis, optical character recognition, channel equalization, speech recognition, and audio classification.
   ·Recent results on kernel methods for classification and robust regression.
   ·Classifier combination techniques, including the boosting approach.
   ·More worked examples and figures to deepen the reader's understanding of the methods.
   ·New sections on hot topics, including nonlinear dimensionality reduction, nonnegative matrix factorization, relevance feedback, robust regression, semi-supervised learning, spectral clustering, and clustering combination techniques.
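Among the topics highlighted above, the boosting approach to combining classifiers lends itself to a compact illustration. The sketch below is not taken from the book: it is a minimal AdaBoost-style ensemble of one-dimensional threshold "stumps", run on a made-up toy data set, intended only to show the reweighting idea.

```python
# Minimal AdaBoost sketch with one-dimensional threshold stumps.
# Illustrative only -- the data set and helper names are invented.

import math

def stump_predict(threshold, polarity, x):
    """Classify x as +1/-1 by comparing it against a threshold."""
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds=10):
    """Return a list of (alpha, threshold, polarity) weak learners."""
    n = len(xs)
    w = [1.0 / n] * n                      # uniform sample weights
    ensemble = []
    candidates = sorted(set(xs))
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        best = None
        for t in candidates:
            for pol in (+1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(t, pol, x) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Reweight: misclassified samples gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * y * stump_predict(t, pol, x))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote over the weak learners."""
    s = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return 1 if s >= 0 else -1

# Toy 1-D data: negatives below 5, positives above.
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [-1, -1, -1, -1, 1, 1, 1, 1]
model = train_adaboost(xs, ys, rounds=5)
print([predict(model, x) for x in xs])
```

The renormalization of the sample weights after each round is the step that forces later weak learners to concentrate on the examples the current ensemble gets wrong.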

About the Authors

  Sergios Theodoridis is a professor in the Department of Informatics at the University of Athens, Greece. His main research interests are adaptive signal processing, communications, and pattern recognition. He served as chairman of PARLE-95, as technical chairman of EUSIPCO-98, and on the editorial board of the journal Signal Processing.
Konstantinos Koutroumbas received his Ph.D. from the University of Athens, Greece, in 1995. Since 2001 he has been with the Institute for Space Applications of the National Observatory of Athens, and is an internationally recognized expert.

Table of Contents

preface
chapter 1 introduction
1.1 is pattern recognition important?
1.2 features, feature vectors, and classifiers
1.3 supervised, unsupervised, and semi-supervised learning
1.4 matlab programs
1.5 outline of the book
chapter 2 classifiers based on bayes decision theory
2.1 introduction
2.2 bayes decision theory
2.3 discriminant functions and decision surfaces
2.4 bayesian classification for normal distributions
2.5 estimation of unknown probability density functions
2.6 the nearest neighbor rule
2.7 bayesian networks
2.8 problems
references
chapter 3 linear classifiers
3.1 introduction
3.2 linear discriminant functions and decision hyperplanes
3.3 the perceptron algorithm
3.4 least squares methods
3.5 mean square estimation revisited
3.6 logistic discrimination
3.7 support vector machines
3.8 problems
references
chapter 4 nonlinear classifiers
4.1 introduction
4.2 the xor problem
4.3 the two-layer perceptron
4.4 three-layer perceptrons
4.5 algorithms based on exact classification of the training set
4.6 the backpropagation algorithm
4.7 variations on the backpropagation theme
4.8 the cost function choice
4.9 choice of the network size
4.10 a simulation example
4.11 networks with weight sharing
4.12 generalized linear classifiers
4.13 capacity of the l-dimensional space in linear dichotomies
4.14 polynomial classifiers
4.15 radial basis function networks
4.16 universal approximators
4.17 probabilistic neural networks
4.18 support vector machines: the nonlinear case
4.19 beyond the svm paradigm
4.20 decision trees
4.21 combining classifiers
4.22 the boosting approach to combine classifiers
4.23 the class imbalance problem
4.24 discussion
4.25 problems
references
chapter 5 feature selection
5.1 introduction
5.2 preprocessing
5.3 the peaking phenomenon
5.4 feature selection based on statistical hypothesis testing
5.5 the receiver operating characteristics (roc) curve
5.6 class separability measures
5.7 feature subset selection
5.8 optimal feature generation
5.9 neural networks and feature generation/selection
5.10 a hint on generalization theory
5.11 the bayesian information criterion
5.12 problems
references
chapter 6 feature generation i: data transformation and
dimensionality reduction
6.1 introduction
6.2 basis vectors and images
6.3 the karhunen-loeve transform
6.4 the singular value decomposition
6.5 independent component analysis
6.6 nonnegative matrix factorization
6.7 nonlinear dimensionality reduction
6.8 the discrete fourier transform (dft)
6.9 the discrete cosine and sine transforms
6.10 the hadamard transform
6.11 the haar transform
6.12 the haar expansion revisited
6.13 discrete time wavelet transform (dtwt)
6.14 the multiresolution interpretation
6.15 wavelet packets
6.16 a look at two-dimensional generalizations
6.17 applications
6.18 problems
references
chapter 7 feature generation ii
7.1 introduction
7.2 regional features
7.3 features for shape and size characterization
7.4 a glimpse at fractals
7.5 typical features for speech and audio classification
7.6 problems
references
chapter 8 template matching
8.1 introduction
8.2 measures based on optimal path searching techniques
8.3 measures based on correlations
8.4 deformable template models
8.5 content-based information retrieval: relevance feedback
8.6 problems
references
chapter 9 context-dependent classification
9.1 introduction
9.2 the bayes classifier
9.3 markov chain models
9.4 the viterbi algorithm
9.5 channel equalization
9.6 hidden markov models
9.7 hmm with state duration modeling
9.8 training markov models via neural networks
9.9 a discussion of markov random fields
9.10 problems
references
chapter 10 supervised learning: the epilogue
10.1 introduction
10.2 error-counting approach
10.3 exploiting the finite size of the data set
10.4 a case study from medical imaging
10.5 semi-supervised learning
10.6 problems
references
chapter 11 clustering: basic concepts
11.1 introduction
11.2 proximity measures
11.3 problems
references
chapter 12 clustering algorithms i: sequential algorithms
12.1 introduction
12.2 categories of clustering algorithms
12.3 sequential clustering algorithms
12.4 a modification of bsas
12.5 a two-threshold sequential scheme
12.6 refinement stages
12.7 neural network implementation
12.8 problems
references
chapter 13 clustering algorithms ii: hierarchical algorithms
13.1 introduction
13.2 agglomerative algorithms
13.3 the cophenetic matrix
13.4 divisive algorithms
13.5 hierarchical algorithms for large data sets
13.6 choice of the best number of clusters
13.7 problems
references
chapter 14 clustering algorithms iii: schemes based on
function optimization
14.1 introduction
14.2 mixture decomposition schemes
14.3 fuzzy clustering algorithms
14.4 possibilistic clustering
14.5 hard clustering algorithms
14.6 vector quantization
14.7 problems
references
chapter 15 clustering algorithms iv
15.1 introduction
15.2 clustering algorithms based on graph theory
15.3 competitive learning algorithms
15.4 binary morphology clustering algorithms (bmcas)
15.5 boundary detection algorithms
15.6 valley-seeking clustering algorithms
15.7 clustering via cost optimization (revisited)
15.8 kernel clustering methods
15.9 density-based algorithms for large data sets
15.10 clustering algorithms for high-dimensional data sets
15.11 other clustering algorithms
15.12 combination of clusterings
15.13 problems
references
chapter 16 cluster validity
16.1 introduction
16.2 hypothesis testing revisited
16.3 hypothesis testing in cluster validity
16.4 relative criteria
16.5 validity of individual clusters
16.6 clustering tendency
16.7 problems
references
appendix a hints from probability and statistics
appendix b linear algebra basics
appendix c cost function optimization
appendix d basic definitions from linear systems theory
index
