
Presentation Transcript


  1. Data Classification with the Radial Basis Function Network Based on a Novel Kernel Density Estimation Algorithm. Yen-Jen Oyang, Department of Computer Science and Information Engineering, National Taiwan University

  2. Identifying Boundary of Different Classes of Objects

  3. Boundary Identified

  4. The Proposed RBF Network Based Classifier • The proposed algorithm constructs one RBF network per class to approximate the probability density function of that class of objects. • Classification of a new object is conducted based on the likelihood function:
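
The likelihood function itself is not reproduced in this transcript. As a rough illustration only, the sketch below (hypothetical names `gaussian_mixture_density` and `classify`, and an assumed prior-times-density score) shows the overall structure: one spherical-Gaussian density model per class, and assignment of a new object to the class with the largest score.

```python
import numpy as np

def gaussian_mixture_density(v, centers, weights, sigmas):
    """Evaluate a weighted sum of spherical Gaussian functions at point v."""
    d = centers.shape[1]
    norm = (np.sqrt(2.0 * np.pi) * sigmas) ** d          # per-Gaussian normalization
    dist2 = np.sum((centers - v) ** 2, axis=1)
    return float(np.sum(weights * np.exp(-dist2 / (2.0 * sigmas ** 2)) / norm))

def classify(v, class_models, priors):
    """Assign v to the class with the largest (assumed) prior-weighted density."""
    scores = {m: priors[m] * gaussian_mixture_density(v, *class_models[m])
              for m in class_models}
    return max(scores, key=scores.get)
```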

  5. Rule Generated by the Proposed RBF (Radial Basis Function) Network Based Learning Algorithm • Let f_O and f_X denote the approximate likelihood functions of the two classes. If f_O(v) ≥ f_X(v) for the new object v, then prediction = “O”. Otherwise prediction = “X”.

  6. Problem Definition of Kernel Smoothing • Given the values of a function f at a set of samples {s_1, …, s_n}, we want to find a set of symmetric kernel functions K_i and corresponding weights w_i such that f(v) ≈ Σ_i w_i · K_i(v).

  7. Kernel Smoothing with the Spherical Gaussian Functions • Hartman et al. showed that a linear combination of spherical Gaussian functions can approximate any function with arbitrarily small error. • “Layered neural networks with Gaussian hidden units as universal approximations”, Neural Computation, Vol. 2, No. 2, 1990.

  8. With the Gaussian kernel functions, we want to find weights w_i and bandwidths σ_i such that f(v) ≈ Σ_i w_i · exp(−‖v − s_i‖² / (2σ_i²)).

  9. Problem Definition of Kernel Density Estimation • Assume that we are given a set of samples taken from a probability distribution in a d-dimensional vector space. The problem is to find a linear combination of kernel functions that approximates the probability density function of the distribution.

  10. The value of the probability density function at a vector v can be estimated as p̂(v) ≈ k / (n · V_k(v)), where n is the total number of samples, R_k(v) is the distance between vector v and its k-th nearest sample, and V_k(v) is the volume of a sphere with radius R_k(v) in the d-dimensional vector space.
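
The estimator described above is the standard k-nearest-neighbour density estimate, p̂(v) ≈ k / (n · V_k(v)). A minimal sketch follows; the function name `knn_density` is mine, not from the slides.

```python
import numpy as np
from math import gamma, pi

def knn_density(v, samples, k):
    """k-nearest-neighbour density estimate: p(v) ~ k / (n * V_k(v)), where
    V_k(v) is the volume of the d-dimensional sphere whose radius is the
    distance from v to its k-th nearest sample."""
    n, d = samples.shape
    dists = np.sort(np.linalg.norm(samples - v, axis=1))
    r_k = dists[k - 1]                                    # distance to the k-th nearest sample
    vol = pi ** (d / 2) / gamma(d / 2 + 1) * r_k ** d     # volume of a d-ball of radius r_k
    return k / (n * vol)
```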

  11. A 1-D Example of Kernel Smoothing with the Spherical Gaussian Functions

  12. The Existing Approaches for Kernel Smoothing with Spherical Gaussian Functions • One conventional approach is to place one Gaussian function at each sample. As a result, the problem becomes how to find a weight w_i for each sample s_i such that the weighted sum of Gaussians reproduces the sampled function values.

  13. The most widely used objective is to minimize the sum of squared differences between the approximator and the function values over a set of test samples outside S, where S is the set of training samples. • The conventional approach suffers from high time complexity, approaching O(n³), due to the need to compute the inverse of an n × n matrix.
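
A minimal sketch of the conventional approach described above, assuming one Gaussian per sample with a common bandwidth sigma (the names and the exact objective are my own simplification); solving the n × n system is what produces the roughly cubic cost.

```python
import numpy as np

def fit_weights_conventional(samples, values, sigma):
    """Place one spherical Gaussian at every training sample and solve the
    n x n linear system G w = f for the weights; the solve (equivalently, the
    matrix inversion) is what drives the roughly O(n^3) cost."""
    diff = samples[:, None, :] - samples[None, :, :]
    G = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))
    return np.linalg.solve(G, values)
```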

  14. M. Orr proposed a number of approaches to reduce the number of units in the hidden layer of the RBF network. • Beatson et al. proposed O(n log n) learning algorithms using polyharmonic spline functions.

  15. An O(n) Algorithm for Kernel Smoothing • In the proposed learning algorithm, we assume uniform sampling. That is, samples are located at the crosses of an evenly-spaced grid in the d-dimensional vector space. Let δ denote the distance between two adjacent samples. • If the assumption of uniform sampling does not hold, then some sort of interpolation can be conducted to obtain the approximate function values at the crosses of the grid, as illustrated below.
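
A small, hypothetical illustration of that interpolation step, using SciPy's general-purpose `griddata`; the slides do not prescribe a particular interpolation method, so this is only one possible choice.

```python
import numpy as np
from scipy.interpolate import griddata

# Resample scattered 2-D observations onto an evenly-spaced grid so that the
# uniform-sampling assumption holds (illustrative data, not from the slides).
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 2))                  # scattered sample locations
vals = np.sin(3.0 * pts[:, 0]) * np.cos(3.0 * pts[:, 1])    # observed function values
delta = 0.05                                                # grid spacing
gx, gy = np.mgrid[0.0:1.0:delta, 0.0:1.0:delta]             # crosses of the grid
grid_vals = griddata(pts, vals, (gx, gy), method='linear')  # NaN outside the convex hull
```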

  16. A 2-D Example of Uniform Sampling

  17. The Basic Idea of the O(n) Kernel Smoothing Algorithm • Under the assumption that the sampling density is sufficiently high, the function values at a sample s_i and at its k nearest samples are virtually equal; that is, f(s_i) ≈ f(s_j) for each of those neighbours s_j. • In other words, f is virtually a constant function equal to f(s_i) in the proximity of s_i.

  18. Accordingly, we can expect that

  19. A 1-D Example

  20. In the 1-D example, samples are located at i·δ, where i is an integer. • Under the assumption that the sampling density is sufficiently high, the function values at adjacent samples are virtually equal. • The issue now is to find appropriate weights and bandwidths such that the resulting sum of Gaussians approximates f.

  21. If we set , then we have

  22. Therefore, with , we can set and obtain for

  23. In fact, it can be shown that with this setting, the approximation error is bounded. • Therefore, we have the following function approximator:
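
The exact constants of the 1-D approximator are in formulas that did not survive in this transcript. Under one plausible reading (my assumption): with samples at i·δ and bandwidth σ = β·δ, weights f(i·δ)·δ/(√(2π)·σ) make the Gaussians sum to roughly one, so the weighted sum reproduces f. A quick numerical check of that reading:

```python
import numpy as np

# Assumed reading of the 1-D result: samples at i*delta, bandwidth sigma = beta*delta,
# weights f(i*delta) * delta / (sqrt(2*pi)*sigma), so the Gaussians sum to roughly 1.
delta, beta = 0.1, 1.0
sigma = beta * delta
grid = np.arange(-5.0, 5.0, delta)        # sample locations i*delta
f = np.sin                                # function being smoothed

def smooth(x):
    w = f(grid) * delta / (np.sqrt(2.0 * np.pi) * sigma)
    return float(np.sum(w * np.exp(-(x - grid) ** 2 / (2.0 * sigma ** 2))))

print(smooth(1.0), np.sin(1.0))           # the two values should nearly agree
```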

  24. Generalization of the 1-D Kernel Smoothing Function • We can generalize the result by setting the bandwidth to β·δ, where β is a real number. • The table on the next page shows the bounds of the approximation error for various values of β.

  25. An Example of the Effect of Different Settings of β

  26. The Smoothing Effect • The kernel smoothing function is actually a weighted average of the sampled function values. Therefore, selecting a larger β value implies that the smoothing effect will be more significant. • Our suggestion is to set

  27. An Example of the Smoothing Effect (two panels: the smoothing effect, and elimination of the smoothing effect with a compensation procedure)

  28. Compensation of the Smoothing Effect and Handling of Random Noises • Let f̃(s_i) = f(s_i) + n_i denote the observed function value at sample s_i, where n_i is the random noise due to the sampling procedure. • The expected value of the random noise at each sample is 0.

  29. According to the law of large numbers, for any real number ε &gt; 0, the probability that the average of the random noise terms deviates from 0 by more than ε approaches 0 as the number of samples increases. • Therefore, averaging over a sufficient number of neighbouring samples effectively cancels the noise.

  30. The iterative compensation procedure:
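
The compensation procedure itself is not reproduced in this transcript. The sketch below is only a hypothetical illustration of the general idea of iterative compensation: re-evaluate the smoother at the sample points and feed the residual back in, so the smoothing bias shrinks with each round.

```python
import numpy as np

def compensate(observed, smooth_at_samples, rounds=3):
    """Hypothetical sketch (not the slide's exact procedure): repeatedly add the
    residual between the observed values and the smoothed values back into the
    target values, so the smoothing bias at the sample points shrinks."""
    target = observed.astype(float).copy()
    for _ in range(rounds):
        residual = observed - smooth_at_samples(target)   # bias left by the smoother
        target = target + residual                        # push the fit back toward the data
    return target
```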

  31. The General Form of a Kernel Smoothing Function in the Multi-Dimensional Vector Space • Under the assumption that the sampling density is sufficiently high, the function values at a sample s_i and at its k nearest samples are virtually equal; that is, f(s_i) ≈ f(s_j) for each of those neighbours s_j.

  32. As a result, we can expect that f can be approximated in the proximity of s_i by the Gaussian functions located at s_i and its k nearest samples, where the w's and σ's are the weights and bandwidths of those Gaussian functions, respectively.

  33. Since the influence of a Gaussian function decreases exponentially as the distance increases, we can set k to a value such that, for a vector v in the proximity of sample s_i, the contributions of the Gaussian functions located at the remaining, more distant samples are negligible.

  34. Since f is virtually constant in the proximity of s_i, our objective is to find weights and bandwidths such that the local sum of Gaussian functions reproduces that constant value.

  35. Let Then, we have

  36. Therefore, with , is virtually a constant function and • Accordingly, we want to set

  37. Finally, by setting the bandwidth uniformly to the value chosen above, we obtain the following kernel smoothing function that approximates f(v):

  38. Generally speaking, if we set the bandwidth uniformly to β·δ, we will obtain:
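
The resulting formula is not reproduced here. Under the same assumed reading as in the 1-D case, the d-dimensional smoothing function would use σ = β·δ and weights f(s_i)·(δ/(√(2π)·σ))^d; the following is a sketch of that assumption, not the author's stated formula.

```python
import numpy as np

def kernel_smooth(v, grid_samples, grid_values, delta, beta):
    """Assumed d-dimensional form: bandwidth sigma = beta*delta and weights
    f(s_i) * (delta / (sqrt(2*pi)*sigma))**d, so that the Gaussians placed on
    the uniform grid again sum to roughly one."""
    d = grid_samples.shape[1]
    sigma = beta * delta
    w = grid_values * (delta / (np.sqrt(2.0 * np.pi) * sigma)) ** d
    dist2 = np.sum((grid_samples - v) ** 2, axis=1)
    return float(np.sum(w * np.exp(-dist2 / (2.0 * sigma ** 2))))
```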

  39. Application in Data Classification • One of the applications of the RBF network is data classification. • However, recent development in data classification has focused on support vector machines (SVM), due to accuracy concerns. • In this lecture, we will describe an RBF network based data classifier that can deliver the same level of accuracy as the SVM while enjoying some additional advantages.

  40. The Proposed RBF Network Based Classifier • The proposed algorithm constructs one RBF network per class to approximate the probability density function of that class of objects, based on the kernel smoothing algorithm that we just presented.

  41. The Proposed Kernel Density Estimation Algorithm for Data Classification • Classification of a new object is conducted based on the likelihood function:

  42. Let us adopt the following estimate of the value of the probability density function at each training sample:

  43. In the kernel smoothing problem, we set the bandwidth of each Gaussian function uniformly to β·δ, where δ is the distance between two adjacent training samples. • In the kernel density estimation problem, for each training sample, we need to determine the average distance between two adjacent training samples of the same class in the local region.

  44. In the d-dimensional vector space, if the average distance between samples is δ, then the number of samples in a subspace of volume V is approximately equal to V / δ^d. • Accordingly, we can estimate δ locally, as sketched below.
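
The estimate itself is in a formula that did not survive in this transcript. One reading consistent with the counting argument above (an assumption, not the author's stated method): the ball around a sample that reaches its k-th nearest same-class neighbour contains about V/δ^d samples, so δ ≈ (V_k/k)^(1/d).

```python
import numpy as np
from math import gamma, pi

def local_spacing(s, class_samples, k):
    """Assumed estimate of the local spacing delta around sample s: the ball
    reaching the k-th nearest same-class sample contains about V / delta**d
    samples, so delta ~ (V_k / k)**(1/d)."""
    n, d = class_samples.shape
    dists = np.sort(np.linalg.norm(class_samples - s, axis=1))
    r_k = dists[k]                                        # index 0 is s itself
    v_k = pi ** (d / 2) / gamma(d / 2 + 1) * r_k ** d     # volume of the d-ball
    return (v_k / k) ** (1.0 / d)
```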

  45. Accordingly, with the kernel smoothing function that we obtained earlier, we have the following approximate probability density function for class-m objects:

  46. An interesting observation is that, regardless of the value of , we have . • If the observation holds generally, then

  47. In the discussion above, the local spacing is defined via the distance between a sample and its nearest training sample. • However, this definition depends on only a single sample and tends to be unreliable if the data set is noisy. • We can replace it with a more robust estimate, as sketched below.
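
A small sketch of that replacement, under my reading of the slide: average the distances to the k nearest same-class training samples instead of relying on the single nearest one. The function name and the choice of k are assumptions.

```python
import numpy as np

def avg_nn_distance(s, class_samples, k):
    """More robust spacing estimate: average the distances from s to its k
    nearest training samples of the same class."""
    dists = np.sort(np.linalg.norm(class_samples - s, axis=1))
    return float(np.mean(dists[1:k + 1]))   # drop index 0, which is s itself
```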
