
System ID
U00022505200615111100
Thesis Title (Chinese)
粒子群演算法為基礎的演化式學習系統設計及其應用 (Design and Applications of a Particle Swarm Optimization-Based Evolutionary Learning System)
Thesis Title (English)
PSO-Based Evolutionary Learning: System Design and Applications
University
淡江大學 (Tamkang University)
Department (Chinese)
電機工程學系博士班 (Doctoral Program, Department of Electrical Engineering)
Department (English)
Department of Electrical Engineering
Academic Year
94
Semester
2
Year of Publication
95
Student Name (Chinese)
陳慶逸
Student Name (English)
Ching-Yi Chen
Student ID
891350018
Degree
Doctoral (Ph.D.)
Language
English
Oral Defense Date
2006-05-19
Number of Pages
169
Committee
Advisor: 余繁; Committee members: 蘇木春, 許獻聰, 詹益光, 翁慶昌

Keywords (Chinese)
粒子群演算法 (Particle Swarm Optimization)
群聚分析 (Cluster Analysis)
群聚驗證 (Cluster Validity)
向量量化 (Vector Quantization)
類神經網路 (Neural Networks)

Keywords (English)
Particle Swarm Optimization
Cluster Analysis
Cluster Validity
Vector Quantization
Fuzzy c-means
Neural Networks

Subject Classification 

Abstract (Chinese)
This dissertation reviews the principal methods and techniques of evolutionary computation and, on that basis, investigates particle swarm optimization (PSO) and its applications to data mining, image compression, and neural networks. For each class of problem, we analyze and combine different auxiliary mechanisms to design PSO learning architectures, with the goal of obtaining an effective PSO application framework. PSO is one of the principal evolutionary computation techniques: it turns into an optimization algorithm the observation that a biological swarm can self-organize and evolve as an emergent system simply through individuals imitating and following one another. PSO searches high-dimensional problem spaces for optimal solutions through a probabilistic combination of a social-only model, which imitates the social behavior of swarming organisms, and a cognition-only model, which reflects individual experience. The algorithm is simple and fast, its search principle effectively avoids local minima, and it therefore offers a very good approach to multimodal optimization problems. In the first part of the dissertation we introduce two PSO-based clustering algorithms. The first is a partitional clustering framework that combines an exponential-type distance metric with the K-means algorithm; given a user-specified number of clusters, it produces an optimal partition of the data. The second PSO clustering algorithm integrates a cluster validity measure to determine both the optimal number of clusters and the cluster centers for data mining problems, thereby achieving fully automatic clustering. The second part of the dissertation proposes a fuzzy PSO vector quantizer for image compression, which generates an optimized codebook of image vectors through PSO parameter learning and fuzzy inference; compared with the conventional LBG method, the proposed framework is more effective and more robust. The final part of the dissertation is devoted to PSO applications in radial basis function (RBF) neural networks. For the hidden-layer nodes and weights of the network, we first use a normalized fuzzy c-means algorithm (NFCM) for coarse-level structure identification, and then perform fine-tuning with a PSO algorithm combined with the recursive least-squares method. Besides training RBF neural networks with a very small PSO population, this novel method also greatly improves modeling performance and efficiency. The main contribution of this dissertation is a systematic PSO learning framework and its application to engineering design optimization problems; based on this general framework, reliable, high-performance engineering optimization systems can be developed rapidly in the future. 
Abstract (English)
The new paradigm of Swarm Intelligence called Particle Swarm Optimization (PSO) is one of the well-known evolutionary computation techniques and can be considered an efficient tool for finding near-optimal solutions in a search space. PSO is especially useful when the problem to be solved is high-dimensional or nonlinear, or when specific problem information is unavailable. PSO combines the social-only model and the cognition-only model to select the adjustable parameters that approach the optimal solution; its main advantages are rapid convergence and small computational requirements, which make it a good candidate for solving optimization problems. In this dissertation, efficient, robust, and flexible PSO algorithms are proposed to build intelligent systems for applications such as cluster analysis, image processing, and neural network training.
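As a concrete illustration of the update rules behind the social-only and cognition-only models, the following is a minimal, generic PSO minimizer in Python. It is a sketch of the standard algorithm, not the specific variants developed in the dissertation; the inertia weight and acceleration coefficients (w, c1, c2) are common illustrative values, and no velocity clamping is applied.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Minimal PSO minimizer: each particle is pulled toward its own best
    position (cognition-only model) and the swarm's best position
    (social-only model) with random weights, then moved by its velocity."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]               # personal best positions
    pbest_f = [objective(x) for x in xs]     # personal best fitness values
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g] # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])   # cognition term
                            + c2 * r2 * (gbest[d] - xs[i][d]))     # social term
                xs[i][d] += vs[i][d]
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f

# Usage: minimize the 2-D sphere function, whose optimum is the origin.
best, best_f = pso(lambda x: sum(v * v for v in x), dim=2)
```

With these parameter values the swarm contracts steadily toward the global best, which is why the combination of a moderate inertia weight and balanced acceleration coefficients is a common default.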
The first task of this dissertation introduces two types of PSO clustering applications. In the first, the number of clusters is given in advance by the user, and PSO is then applied to achieve the optimal clustering result. In the second, the PSO algorithm incorporates a cluster validity measure to automatically determine the true number of cluster centers, extract the real cluster centers, and produce a good classification.
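In objective-function-based PSO clustering of this kind, a particle is commonly encoded as a flat vector of K candidate cluster centers and scored by the within-cluster sum of squared errors. The sketch below illustrates only that encoding and fitness; it is a simplified stand-in for the dissertation's AKPSO/AUTOPSO fitness functions, and the function name and Euclidean scoring are assumptions for illustration.

```python
def sse_fitness(flat_particle, data, k, dim):
    """Decode a flat particle of length k*dim into k cluster centers and
    score it by the within-cluster sum of squared errors (lower is better).
    Each data point contributes its squared distance to the nearest center."""
    centers = [flat_particle[i * dim:(i + 1) * dim] for i in range(k)]
    sse = 0.0
    for point in data:
        sse += min(sum((p - c) ** 2 for p, c in zip(point, center))
                   for center in centers)
    return sse
```

A PSO run would then minimize this function over particles of dimension k*dim; the validity-measure variant additionally penalizes or disables candidate centers to choose k itself.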
The second task of this dissertation is to develop an evolutionary fuzzy particle swarm optimization (FPSO) learning algorithm that automatically extracts a near-optimum vector quantization (VQ) codebook for image compression. Based on the adaptive learning scheme of PSO and the flexible membership function of the fuzzy inference system, the dissertation also demonstrates the advantages of the resulting FPSO-VQ-based image compression system.
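Independent of how the codebook is searched for, VQ encoding maps each training vector to its nearest codeword and measures the resulting distortion, which is what any codebook-design fitness ultimately evaluates. A minimal Euclidean-distortion sketch follows; the helper name is hypothetical, not taken from the dissertation.

```python
def quantize(vectors, codebook):
    """Map each vector to the index of its nearest codeword (squared
    Euclidean distance) and return (indices, mean squared distortion)."""
    idxs, total = [], 0.0
    for v in vectors:
        best_i, best_d = min(
            ((i, sum((a - b) ** 2 for a, b in zip(v, c)))
             for i, c in enumerate(codebook)),
            key=lambda t: t[1])
        idxs.append(best_i)   # transmitted index for this vector
        total += best_d       # accumulated squared error
    return idxs, total / len(vectors)
```

In image compression the `vectors` would be flattened pixel blocks, and PSNR is computed from exactly this kind of per-block distortion.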
The last part of this dissertation focuses on radial basis function network (RBFN) learning. An innovative hybrid recursive particle swarm optimization (HRPSO) learning algorithm with normalized fuzzy c-means (NFCM) clustering is proposed to generate RBFN modeling systems with small numbers of descriptive radial basis functions (RBFs) for fast approximation of two complex, nonlinear functions.
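The model being tuned here has the standard RBFN form: a weighted sum of radial basis functions whose centers, widths, and output weights are the parameters a procedure like NFCM-plus-HRPSO must identify. Below is a minimal Gaussian-RBF forward pass as a generic sketch, not the dissertation's exact network.

```python
import math

def rbfn_forward(x, centers, widths, weights, bias=0.0):
    """RBFN output: a linear combination of Gaussian radial basis
    functions, one per hidden node, evaluated at input vector x."""
    out = bias
    for c, s, w in zip(centers, widths, weights):
        r2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))  # squared distance to center
        out += w * math.exp(-r2 / (2.0 * s * s))          # Gaussian basis response
    return out
```

Coarse clustering supplies initial `centers`, while fine tuning adjusts `centers`, `widths`, and the linear `weights`; the linear weights can also be solved by recursive least squares because the output is linear in them.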

Table of Contents 
Contents
1 Introduction
1.1 Motivation…………………………………………1
1.2 Objectives…………………………………………2
1.3 Thesis Outline……………………………2
2 Literature Review
2.1 Optimization Problems…………………………5
2.2 Optimization Algorithms……………………..7
2.3 Genetic Algorithms……………………………..8
2.4 Simulated Annealing……………………………13
2.5 Particle Swarm Optimization……………….16
3 Problem Definition
3.1 The Clustering Problem………………………24
3.1.1 Definitions………….……………………..24
3.1.2 Distance and Similarity…………………..25
3.1.3 Clustering Methods………………………..29
3.1.3.1 Hierarchical clustering………………..30
3.1.3.2 Partitional clustering………………..33
3.1.3.3 K-means clustering……………………..34
3.1.3.4 Fuzzy c-means clustering……………..35
3.1.4 Cluster Validity………….……………..37
3.2 Vector Quantization……………………40
3.2.1 Definitions and Design Problem…….. 41
3.2.2 Optimality Criteria……………………..43
3.2.3 The LBG Algorithm…….………………..44
3.3 Artificial Neural Networks……………..45
3.3.1 Modeling a Neuron………………………..45
3.3.2 Neural Models in Common Use…………..52
3.3.2.1 Multilayer perceptrons……………..52
3.3.2.2 Radial basis function networks…..56
4 Alternative KPSO-Clustering Algorithm
4.1 Introduction………………………………59
4.2 Clustering with PSO Algorithm……………61
4.2.1 Objective Function-Based Clustering…..61
4.2.2 Alternative KPSO-clustering……………..62
4.3 Simulation Results…………………………69
4.4 Conclusion…………….………………………78
5 Automatic Particle Swarm Optimization Clustering Algorithm
5.1 Introduction…………………………80
5.2 AUTOPSO Clustering Algorithm………………82
5.3 Simulation Results…………………………91
5.4 Conclusion………………………………101
6 Evolutionary Fuzzy Particle Swarm Optimization Vector Quantization Learning Scheme in Image Compression
6.1 Introduction…………………………104
6.2 Optimal Codebook Design of VQ…………………107
6.2.1 VQ Preliminaries…………………………..107
6.2.2 Fuzzy Particle Swarm Optimization Vector Quantization Design …110
6.3 Illustrated Examples…………………………120
6.4 Conclusion…………………………………130
7 Hybrid Recursive Particle Swarm Optimization Learning Algorithm in The Design of Radial Basis Function Networks
7.1 Introduction…………………………………131
7.2 RBFNs Architecture………………………134
7.3 Fuzzy Clustering………………………136
7.4 Illustrated Examples……………………146
7.5 Conclusion…………………………………151
8 Conclusions
8.1 Summary of Conclusions…………………152
8.2 Future Research. ………………………154
References………………………………………156
Publications…………………………………167
List of Figures
Figure 2.1 Example of global minimizer s as well as a local minimizer s*. ..………….6
Figure 2.2 Examples of binary encoding. ……………………………………………… 10
Figure 2.3 Roulette-Wheel Selection. ………………………………………………….11
Figure 2.4 A single-point crossover. …………………………………………………12
Figure 2.5 Two-point crossover. ……………………………………………………12
Figure 2.6 Mutation operation. …………………….…..……………………………..13
Figure 2.7 Pseudocode of the standard GAs. ……………………………………….13
Figure 2.8 Pseudocode of the SA. ……………………………………………………16
Figure 2.9 Graphical representation of PSO formula. …………………………………20
Figure 2.10 Pseudocode of the PSO algorithm. ………..……………..………………21
Figure 3.1 The difference between Manhattan distance and Euclidean distance. …..…..27
Figure 3.2 Examples of distance functions. (a) Euclidean distance, (b) Manhattan distance,
(c) Minkowski distance (p = 5), (d) Minkowski distance (p = 200). …….28
Figure 3.3 A dendrogram for hierarchical clustering. ……..…………………………31
Figure 3.4 The relationship between divisive and agglomerative hierarchical clustering
algorithms. …………………………………………………………...32
Figure 3.5 K-means clustering algorithm. ……………………………………………..35
Figure 3.6 Cell regions. [Entries for Figures 3.6-3.13 are garbled in the source record; the surviving fragment describes a two-dimensional sigmoidal RBF with center vector = [2,2] and smoothing parameter = 3.] ………………..………………………..57
Figure 3.14 Architecture of Radial basis function network. ……………………………57
Figure 4.1 The encoding of the single particle in the PSO initial population. ……….63
Figure 4.2 The distance measure plot for the alternative metric with different Beta. …..65
Figure 4.3 (a) A two-dimensional data set, (b) Distance function of alternative metric, (c)
Distance function of Euclidean norm. ………………….……………66
Figure 4.4 (a) The data set used in Example 1. The clustering results achieved by the (b)
K-means, (c) Fuzzy c-means, (d) AKPSO. ……….…………………….70
Figure 4.5 (a) The data set used in Example 2. The clustering results achieved by the (b)
K-means, (c) Fuzzy c-means, (d) AKPSO. …………………………….71
Figure 4.6 (a) The data set used in Example 3. The clustering results achieved by the (b)
K-means, (c) Fuzzy c-means, (d) AKPSO. ………………………..…….72
Figure 4.7 (a) The three-dimensional plot for the four-dimensional Iris data which are iris
setosa (○), iris versicolor (△), and iris virginica (+). The clustering results
achieved by the (b) K-means, (c) Fuzzy c-means, (d) AKPSO. ……..…73
Figure 4.7 (e) The proposed algorithm with and without one step of the K-means algorithm (by
using IRIS data), where the population size is taken to be 40. ………74
Figure 4.8 (a) The data set used in Example 5. The clustering results achieved by the (b)
K-means, (c) Fuzzy c-means, (d) AKPSO. ……………………………..75
Figure 4.9 (a) The data set containing three spherical clusters with different sizes. The
clustering results achieved by the (b) K-means, (c) Fuzzy c-means, (d)
AKPSO. ………………………………………………………………76
Figure 4.10 (a) The data set used in Example 7. (b) The clustering result achieved by
K-means, where the cluster centers are [(0.4838, 0.3822, 0.2222), (1.5070,
0.4489, 0.6027)]. (c) The clustering result achieved by Fuzzy c-means, where
the cluster centers are [(0.5491, 0.3608, 0.1063), (1.3690, 0.4874, 0.7590)].
(d) The clustering result achieved by AKPSO, where the cluster centers are
[(0.9890, 0.5760, 1.0000), (0.7073, 0.3129, 0.0000)]. …………………77
Figure 5.1 Response of the average computation for special condition [(4.48, 0.0528), (3.0,
1.0), (0.8817, 3.1866)]. …………………………………………………87
Figure 5.2 Response of the K-means algorithm for special condition [(4.48, 0.0528), (3.0,
1.0), (0.8817, 3.1866)]. ………………………………………………….….88
Figure 5.3 Response of the traditional PSO-clustering method in Example 1. (a) Data set
used in Example 1, (b) Performance measure of different K (K = 2, 3, … ,10)
with the traditional PSO-clustering method. ……………………………..92
Figure 5.4 Response of AUTOPSO algorithm in Example 1. (a) Fitness value against
generation with Gbest (solid), average Pbest (dashed) and average particles
(dash-dotted), (b) CS measure against generation for Gbest, (c) Final selected
Gbest, where ‘∆’ is disabled and ‘○’ is active, (d) The optimal classification
result by the selected cluster centers. ………………………………………94
Figure 5.5 Fitness curve with different population sizes. ………………………………95
Figure 5.6 Response of AUTOPSO algorithm in Example 2. (a) The data set used in
Example 2, (b) Final Gbest, where ‘∆’ is disabled and ‘○’ is active, (c) The
optimal classification result by the selected cluster centers. …………….96
Figure 5.7 Response of AUTOPSO algorithm in Example 3. (a) The data set used in
Example 3, (b) Final Gbest, where ‘∆’ is disabled and ‘○’ is active, (c) The
optimal classification result by the selected cluster centers. …………….98
Figure 5.8 Response of AUTOPSO algorithm in Example 4. (a) The three-dimensional
plot for the four-dimensional IRIS data, (b) The three-dimensional plot for the
four-dimensional IRIS data: IRIS setosa (○), IRIS versicolor (△), and IRIS
virginica (+), (c) The clustering results achieved by the proposed method. ...100
Figure 5.9(a) Training images data set. ……………………………………………….101
Figure 5.9(b) Clustering result by the proposed method. ……………………………..101
Figure 6.1 385 data points distribution and contour drawing with Euclidean measure
metric. …………………………………………………………………..110
Figure 6.2 385 data points distribution and contour drawing with the proposed fuzzy
partition metric. ……………………………………………………….113
Figure 6.3 The flow chart of the fuzzy particle swarm optimization vector quantization in
the design of the image compression system. …………………………………118
Figure 6.4 Original training images. (a) Lena, (b) Peppers. ………………..………..121
Figure 6.5 PSNR in dB versus the size of the codebook for (a) “Lena” and (b)
“Peppers”. ……………………………………………………………….. 123
Figure 6.6 Zoom-in comparison of the Lena Image. (a) Zoomed image of ‘Lena’, (b)
FPSOVQ reconstructed image of Lena (M = 128, PSNR = 34.0536 dB), (c)
LBG reconstructed image of Lena (M = 128, PSNR = 31.8366 dB), (d)
FPSOVQ reconstructed image of Lena (M = 64, PSNR = 32.3117 dB), (e) LBG
reconstructed image of Lena (M = 64, PSNR = 30.4540 dB). …………125
Figure 6.7 Zoom-in comparison of the Peppers Image. (a) Zoomed image of ‘Peppers’, (b)
FPSOVQ reconstructed image of Peppers (M = 128, PSNR = 32.6646 dB), (c)
LBG reconstructed image of Peppers (M = 128, PSNR = 31.3953 dB), (d)
FPSOVQ reconstructed image of Peppers (M = 64, PSNR = 31.4145 dB), (e)
LBG reconstructed image of Peppers (M = 64, PSNR = 29.3923 dB). ……126
Figure 6.8 Testing results with Lena image. (a) A 256x256 Lena image, (b) Histogram from
original Lena image, (c) FPSOVQ reconstructed image of Lena (M = 256,
PSNR = 35.5636), (d) Histogram from FPSOVQ reconstructed image, (e) LBG
reconstructed image of Lena (M = 256, PSNR = 32.6289), (f) Histogram from
LBG reconstructed image. …………………………………………………128
Figure 6.9 Testing results with Peppers image. (a) A 256x256 Peppers image, (b)
Histogram from original Peppers Image, (c) FPSOVQ reconstructed image of
Peppers (M = 256, PSNR = 35.3088), (d) Histogram from FPSOVQ
reconstructed image, (e) LBG reconstructed image of Peppers (M = 256,
PSNR = 32.0223), (f) Histogram from LBG reconstructed image. ……….129
Figure 7.1. The proposed architecture of the RBFNs. ………………….………..135
Figure 7.2. Examples of distance functions. (a) Euclidean distance (for spherical cluster),
(b) the proposed distance (for spherical cluster), (c) Euclidean distance (for
ellipsoidal cluster), (d) the proposed distance (for ellipsoidal cluster). ....139
Figure 7.3 Simulations comparison for FCM and NFCM methods. ……………….141
Figure 7.4 Learning diagram of the RBFNMS. …………………………………….145
Figure 7.5 Approximation results for the function sin(x1)sin(x2). [Entries for panels (a) and (b) are garbled in the source record.]
(c) Output by HRPSO, (d) fitness value against iteration in PSO and HRPSO
method. …………………………………………………………………150
List of Tables
Table 3.1 Activation functions commonly used in artificial neuron structure. ……….48
Table 4.1 Comparison of K-means, Fuzzy c-means, and AKPSO. ……………………….79
Table 5.1 The selected Gbest for Example 1. …………………..…………………….95
Table 5.2 The selected Gbest for Example 2. ……….……………………………….97
Table 5.3 The selected Gbest for Example 3. ……..…..…..…………………………..99
Table 6.1 Performance (PSNR) comparisons between FPSO-VQ and LBG (codebook
sizes = 4-256). ………………………………………………………….124
Table 7.1 Clustering results for FCM and NFCM. ………………………………….142
Table 7.2 Parameter selection by RPSO for Example 1. ………………………….148
Table 7.3 Performance comparisons with different methods. The last two rows are from ref.
[105]. ………………………………………………………………………148
Table 7.4 Parameter values by RPSO for Example 2. …………………….………151
Table 7.5 Performance comparisons with different methods for Example 2. ………151

References 
REFERENCES
[1] R. Rardin, Optimization in Operations Research, Prentice Hall, New Jersey, USA, 1998.
[2] F. Van den Bergh and A. P. Engelbrecht, “A New Locally Convergent Particle Swarm Optimizer,” Proceedings of the IEEE Conference on Systems, Man and Cybernetics, Hammamet, Tunisia, 2002.
[3] M. Omran, Particle Swarm Optimization Methods for Pattern Recognition and Image Processing, Ph.D. Thesis, University of Pretoria, 2004.
[4] P. Pardalos, A. Migdalas, and R. Burkard, Combinatorial and Global Optimization, World Scientific Publishing Company, 2002.
[5] K. Demirciler, Contributions to Efficient Vector Quantization and Frequency Assignment Design and Implementation, Ph.D. Thesis, University of Southern California, 2003.
[6] D. G. Luenberger, Linear and Nonlinear Programming, Addison–Wesley Publishing Company, 1984.
[7] Y. Shang, Global Search Methods for Solving Nonlinear Optimization Problems, Ph.D. Thesis, University of Illinois, 1997.
[8] Z. Michalewicz and D. Fogel, How to Solve it: Modern Heuristics, Springer-Verlag, Berlin, 2000.
[9] P. Van Laarhoven and E. Aarts, Simulated Annealing: Theory and Applications, Kluwer Academic Publishers, 1987.
[10] S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science, Vol. 220, pp. 671-680, 1983.
[11] M. Duque-Anton, D. Kunz, and B. Ruber, “Channel assignment for Cellular Radio using Simulated Annealing,” IEEE Trans. on Vehicular Technology, Vol. 42, No. 1, pp. 14-21, 1993.
[12] F. Glover, “Tabu Search – Part I,” ORSA Journal on Computing, Vol. 1, No. 3, pp. 190-206, 1989.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence, University of Michigan Press, Ann Arbor, MI, USA, 1975.
[14] T. Back, “Evolutionary Algorithm: Comparisons of Approaches,” Computing with Biological Metaphors, Chapman and Hall, Cambridge, UK, 1994.
[15] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, 1996.
[16] E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial and Neural Computing, Interscience series in discrete mathematics and optimization, John Wiley & Sons, New York, 1989.
[17] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller, “Equation of state calculations by fast computing machines,” Journal of Chemical Physics, Vol. 21, pp. 1087 – 1092, 1953.
[18] C. W. Gardiner, Handbook of Stochastic Methods, Berlin, Springer, 1983.
[19] M. Lundy and A. Mees, “Convergence of an Annealing Algorithm,” Mathematical Programming, Vol. 34, pp. 111-124, 1986.
[20] S. Kirkpatrick, “Optimization by Simulated Annealing: Quantitative Studies,” Journal of Statistical Physics, Vol. 34, pp. 975-986, 1984.
[21] J. Kennedy and R. Eberhart, “Particle Swarm Optimization,” Proceedings of the IEEE International Conference on Neural Networks (ICNN), Vol. IV, Perth, Australia, pp. 1942-1948, 1995.
[22] R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39-43, 1995.
[23] R. Eberhart and Y. Shi, “Particle Swarm Optimization: Developments, Applications and Resources,” Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, Korea, 2001.
[24] Y. Shi and R. Eberhart, “Parameter Selection in Particle Swarm Optimization,” Evolutionary Programming VII: Proceedings of EP 98, pp. 591-600, 1998.
[25] X. Hu, PSO Tutorial, http://www.cems.uwe.ac.uk/~jsmith/ci/pso/tutorials.php.htm (visited May 2006).
[26] R. Battiti, M. Brunato, and S. Pasupuleti, Do not be afraid of local minima: Affine Shaker and Particle Swarm, Technical Report, pp. 8-13, May 2005.
[27] Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” Proceedings of the IEEE Congress on Evolutionary Computation (CEC 1998), Piscataway, NJ, pp. 69-73, 1998.
[28] J. Kennedy and R. Eberhart, Swarm Intelligence, Morgan Kaufmann Publishers, 2001.
[29] M. Clerc, “The swarm and the queen: towards a deterministic and adaptive particle swarm optimization,” Proceedings of the IEEE Congress on Evolutionary Computation, Washington, DC, Piscataway, NJ: IEEE Service Center, pp. 1951-1957, 1999.
[30] M. Clerc and J. Kennedy, “The particle swarm: explosion, stability, and convergence in a multidimensional complex space,” IEEE Trans. on Evolutionary Computation, Vol. 6, pp. 58-73, 2002.
[31] Y. Shi, “Particle Swarm Optimization,” IEEE Neural Networks Society, Feb. 2004.
[32] S. Theodoridis and K. Koutroumbas, Pattern Recognition, Academic Press, San Diego, 1999.
[33] R. N. Dave and R. Krishnapuram, “Robust clustering methods: a unified view,” IEEE Trans. on Fuzzy Systems, Vol. 5, No. 2, pp. 270-293, 1997.
[34] X. L. Xie and G. Beni, “A validity measure for fuzzy clustering,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 13, No. 8, pp. 841-847, 1991.
[35] G. Hamerly, Learning Structure and Concepts in Data using Data Clustering, Ph.D. Thesis, University of California, San Diego, 2003.
[36] G. Hamerly and C. Elkan, “Alternatives to the K-means Algorithm that Find Better Clusterings,” Proceedings of the ACM Conference on Information and Knowledge Management (CIKM 2002), pp. 600-607, 2002.
[37] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill Book Company, New York, 1976.
[38] A. Jain, M. Murty, and P. Flynn, “Data Clustering: A Review,” ACM Computing Surveys, Vol. 31, No. 3, pp. 264-323, 1999.
[39] C. H. Chou, The Development of Learning Mechanisms and Their Applications, Ph.D. Thesis, Tamkang University, Taiwan, 2003.
[40] A. K. Jain, M. N. Murty, and P. J. Flynn, “Data Clustering: A Review,” ACM Computing Surveys, Vol. 31, No. 3, pp. 264-323, 1999.
[41] R. H. Turi, Clustering-Based Colour Image Segmentation, Ph.D. Thesis, Monash University, Australia, 2001.
[42] E. Forgy, “Cluster Analysis of Multivariate Data: Efficiency versus Interpretability of Classification,” Biometrics, Vol. 21, pp. 768-769, 1965.
[43] J. C. Dunn, “A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters,” Journal of Cybernetics, Vol. 3, No. 3, pp. 32-57, 1973.
[44] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, 1981.
[45] J. C. Bezdek and N. R. Pal, “Some new indexes for cluster validity,” IEEE Trans. on Systems, Man, and Cybernetics, Part B, Vol. 28, pp. 301-315, 1998.
[46] J. C. Bezdek, “Numerical taxonomy with fuzzy sets,” Journal of Mathematical Biology, Vol. 1, pp. 57-71, 1974.
[47] A. M. Bensaid, L. O. Hall, J. C. Bezdek, L. P. Clarke, M. L. Silbiger, J. A. Arrington, and R. F. Murtagh, “Validity-guided Clustering with applications to image segmentation,” IEEE Trans. on Fuzzy Systems, Vol. 4, No. 2, pp. 112-123, 1996.
[48] X. L. Xie and G. Beni, “A validity measure for fuzzy clustering,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 13, No. 8, pp. 841-847, 1991.
[49] J. C. Dunn, “A fuzzy relative of the ISODATA process and its use in detecting compact, well separated clusters,” Journal of Cybernetics, Vol. 3, No. 3, pp. 32-57, 1973.
[50] D. L. Davies and D. W. Bouldin, “A cluster separation measure,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 1, No. 4, pp. 224-227, 1979.
[51] datacompression.com, Vector Quantization, http://www.datacompression.com/vq.shtml (visited Jan. 2006).
[52] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, Kluwer Academic Publishers, 1992.
[53] Y. Linde, A. Buzo, and R. M. Gray, “An Algorithm for Vector Quantizer Design,” IEEE Trans. on Communications, pp. 702-710, 1980.
[54] N. B. Karayiannis and P. I. Pai, “Fuzzy Vector Quantization Algorithms and Their Application in Image Compression,” IEEE Trans. on Image Processing, Vol. 4, pp. 1193-1201, 1995.
[55] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, Vol. 323, pp. 533-536, 1986.
[56] C. M. Bishop, Neural Networks for Pattern Recognition, Oxford Press, 1995.
[57] J. Moody and C. J. Darken, “Fast learning in networks of locally-tuned processing units,” Neural Computation, Vol. 1, No. 2, pp. 281-294, 1989.
[58] M. Powell, “Radial basis functions for multivariable interpolation: a review,” Proceedings of the IMA Conference on Algorithms for the Approximation of Functions and Data, pp. 143-167, 1985.
[59] I. Maqsood, M. R. Khan, and A. Abraham, “Intelligent weather monitoring systems using connectionist models,” Neural, Parallel and Scientific Computations, Vol. 10, pp. 157-178, 2002.
[60] M. J. Orr, “Regularization in the selection of radial basis function centers,” Neural Computation, Vol. 7, No. 3, pp. 606-623, 1995.
[61] S. Z. Selim and M. A. Ismail, “K-means type algorithms: a generalized convergence theorem and characterization of local optimality,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 6, No. 1, pp. 81-87, 1984.
[62] L. Bottou and Y. Bengio, “Convergence properties of the K-means algorithms,” Advances in Neural Information Processing Systems, Vol. 7, The MIT Press, Cambridge, MA, pp. 585-592, 1995.
[63] K. L. Wu and M. S. Yang, “Alternative c-means clustering algorithms,” Pattern Recognition, Vol. 35, pp. 2267-2278, 2002.
[64] U. Maulik and S. Bandyopadhyay, “Genetic algorithm-based clustering technique,” Pattern Recognition, Vol. 33, pp. 1455-1465, 2000.
[65] J. L. R. Filho, P. C. Treleaven, and C. Alippi, “Genetic algorithm programming environments,” IEEE Trans. on Computers, Vol. 27, pp. 28-43, 1994.
[66] M. R. Anderberg, Cluster Analysis for Applications, Academic Press, New York, 1973.
[67] M. R. Garey, D. S. Johnson, and H. S. Witsenhausen, “The complexity of the generalized Lloyd-Max problem,” IEEE Trans. on Information Theory, Vol. 28, No. 2, pp. 255-256, 1982.
[68] C. Y. Chen and F. Ye, “Particle Swarm Optimization Algorithm and Its Application to Clustering Analysis,” Proceedings of the IEEE International Conference on Networking, Sensing and Control, Taipei, Taiwan, pp. 789-794, 2004.
[69] C. Y. Chen and F. Ye, “K-means Algorithm Based on Particle Swarm Optimization,” Proceedings of the International Conference on Informatics, Cybernetics, and Systems, I-Shou University, Taiwan, pp. 1470-1475, 2003.
[70] S. Bandopadhyay and U. Maulik, “Genetic Clustering for Automatic Evolution of Clusters and Application to Image Classification,” Pattern Recognition, Vol. 35, pp. 1197-1208, 2002.
[71] C. C. Wong and B. C. Lin, “Neighbor-based clustering algorithm,” International Journal of Electrical Engineering, Vol. 11, No. 2, pp. 173-181, 2004.
[72] C. H. Chou, M. C. Su, and E. Lai, “A new cluster validity measure and its application to image compression,” Pattern Analysis and Applications, Vol. 7, No. 2, pp. 205-220, 2004.
[73] J. C. Dunn, “Well separated clusters and optimal fuzzy partitions,” Journal of Cybernetics, Vol. 4, pp. 95-104, 1974.
[74] J. T. Tou and R. C. Gonzalez, Pattern Recognition Principles, Addison-Wesley, 1974.
[75] F. Ye and C. Y. Chen, “Alternative KPSO-Clustering Algorithm,” Tamkang Journal of Science and Engineering, Vol. 8, No. 2, pp. 165-174, 2005.
[76] D. Feng, S. Wenkang, C. Liangzhou, D. Yong, and Z. Zhenfu, “Infrared image segmentation with 2-D maximum entropy method based on particle swarm optimization (PSO),” Pattern Recognition Letters, Vol. 26, No. 5, pp. 597-603, 2005.
[77] H. M. Feng, “Self generation fuzzy modeling systems through hierarchical recursive-based particle swarm optimization,” Cybernetics and Systems: An International Journal, Vol. 36, No. 6, pp. 623-639, 2005.
[78] M. C. Su, T. K. Liu, and H. T. Chang, “An efficient initialization scheme for the self-organizing feature map algorithm,” Proceedings of the International Joint Conference on Neural Networks, Washington, DC, pp. 1906-1910, 1999.
[79] K. L. Oehler and R. M. Gray, “Combining image compression and classification using vector quantization,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 17, No. 5, pp. 461-473, 1995.
[80] T. Hofmann and J. M. Buhmann, “Competitive learning algorithms for robust vector quantization,” IEEE Trans. on Signal Processing, Vol. 46, No. 6, pp. 1665-1675, 1998.
[81] L. A. Zadeh, “Fuzzy sets,” Information and Control, Vol. 8, pp. 338-353, 1965.
[82] N. B. Karayiannis and P. I. Pai, “Fuzzy vector quantization algorithms and their application in image compression,” IEEE Trans. on Image Processing, Vol. 4, No. 9, pp. 1193-1201, 1995.
[83] X. Kong, R. Wang, and G. Li, “Fuzzy clustering algorithms based on resolution and their application in image compression,” Pattern Recognition, Vol. 35, No. 11, pp. 2439-2444, 2004.
[84] W. Xu, A. K. Nandi, and J. Zhang, “Novel fuzzy reinforced learning vector quantization algorithm and its application in image compression,” IEE Proceedings - Vision, Image and Signal Processing, Vol. 150, No. 5, pp. 292-298, 2003.
[85] F. Pasi, “Genetic algorithm with deterministic crossover for vector quantization,” Pattern Recognition Letters, Vol. 21, No. 1, pp. 61-68, 2000.
[86] Y. H. Yu, C. C. Chang, and Y. C. Hu, “A genetic-based adaptive threshold selection method for dynamic path tree structured vector quantization,” Image and Vision Computing, Vol. 23, No. 6, pp. 597-609, 2005.
[87] C. Y. Chen, K. Y. Chen, and F. Ye, “Evolutionary-based Vector Quantizer Design,” Proceedings of the International Conference on Systems & Signals, I-Shou University, Taiwan, pp. 649-654, 2005.
[88] H. Ishibuchi, T. Nakashima, and T. Morisawa, “Voting in fuzzy rule-based systems for pattern classification problems,” Fuzzy Sets and Systems, Vol. 103, pp. 223-238, 1992.
[89] D. F. Akhmetov, Y. Dote, and S. J. Ovaska, “Fuzzy neural network with general parameter adaptation for modeling of nonlinear time series,” IEEE Trans. on Neural Networks, Vol. 12, No. 1, pp. 148-152, 2001.
[90] F. Behloul, B. P. F. Lelieveldt, A. Boudraa, and J. H. C. Reiber, “Optimal design of radial basis function neural networks for fuzzy-rule extraction in high dimensional data,” Pattern Recognition, Vol. 35, No. 3, pp. 659-675, 2002.
[91] S. W. Choi, D. Lee, J. H. Park, and I. B. Lee, “Nonlinear regression using RBFN with linear submodels,” Chemometrics and Intelligent Laboratory Systems, Vol. 65, No. 2, pp. 191-208, 2003.
[92] C. C. Chuang, J. T. Jeng, and P. T. Lin, “Annealing robust radial basis function networks for function approximation with outliers,” Neurocomputing, Vol. 56, pp. 123-139, 2004.
[93] O. Ciftcioghu, “GA with orthogonal transformation for RBFN configuration,” Proceedings of the IEEE International Conference on Neural Networks, pp. 1934-1939, 2002.
[94] J. Park and I. W. Sandberg, “Approximation and radial basis function networks,” Neural Computation, Vol. 5, pp. 305-316, 1993.
[95] M. D. Nam and T. C. Thanh, “Approximation of function and its derivatives using radial basis function networks,” Applied Mathematical Modelling, Vol. 27, No. 3, pp. 197-220, 2003.
[96] J. C. Dunn, “A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters,” Journal of Cybernetics, Vol. 3, pp. 32-57, 1973.
[97] Z. L. Gaing, “A particle swarm optimization approach for optimum design of PID controller in AVR system,” IEEE Trans. on Energy Conversion, Vol. 19, No. 2, pp. 384-391, 2004.
[98] C. F. Juang, “A hybrid of genetic algorithm and particle swarm optimization for recurrent network design,” IEEE Trans. on Systems, Man and Cybernetics, Vol. 34, No. 2, pp. 997-1006, 2003.
[99] J. Kennedy, “The particle swarm: Social adaptation of knowledge,” Proceedings of the IEEE International Conference on Evolutionary Computation, Indianapolis, pp. 303-308, 1997.
[100] S. Naka, T. Genji, T. Yura, and Y. Fukuyama, “A Hybrid Particle Swarm Optimization for Distribution State Estimation,” IEEE Trans. on Power Systems, Vol. 18, No. 1, pp. 60-68, 2003.
[101] L. X. Wang, A Course in Fuzzy Systems and Control, Prentice Hall, 1997.
[102] C. C. Wong and C. C. Chen, “A GA-based method for constructing fuzzy systems directly from numerical data,” IEEE Trans. on Systems, Man and Cybernetics, Vol. 30, No. 6, pp. 905-911, 2000.
[103] J. S. Jang, C. T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, New Jersey, 1997.
[104] C. C. Wong and C. C. Chen, “A hybrid clustering and gradient descent approach for fuzzy modeling,” IEEE Trans. on Systems, Man and Cybernetics, Vol. 29, pp. 686-693, 1999.
[105] S. J. Lee and C. H. Ouyang, “A neuro-fuzzy system modeling with self-constructing rule generation and hybrid SVD-based learning,” IEEE Trans. on Fuzzy Systems, Vol. 11, No. 3, pp. 341-353, 2004.
[106] M. Sugeno and T. Yasukawa, “A fuzzy-logic-based approach to qualitative modeling,” IEEE Trans. on Fuzzy Systems, Vol. 1, No. 1, pp. 7-31, 1993.
[107] M. Lovbjerg, Improving Particle Swarm Optimization by Hybridization of Stochastic Search Heuristics and Self-Organized Criticality, Master's Thesis, University of Aarhus, Denmark, 2002.
[108] P. Angeline, “Evolutionary Optimization versus Particle Swarm Optimization: Philosophy and Performance Differences,” Proceedings of the Seventh Annual Conference on Evolutionary Programming, pp. 601-610, 1998.

Copyright Permission
The author agrees to grant in-library readers a royalty-free license to reproduce the printed thesis for academic purposes, with public release on 2007-06-09. The author does not agree to authorize browsing/printing of the electronic full text. 


