  Machine Learning: Classification of the curse of dimensionality
     
  Add Date : 2018-11-21      
         
         
         
When reading machine learning papers, you often see authors mention the "curse of dimensionality". What kind of "curse" is it, exactly? This article introduces this troublesome phenomenon, and its importance in classification problems, through an example.

Suppose we have a set of photos, each containing either a cat or a dog, and we want to design a classifier that automatically recognizes which animal is in each picture. To do this, we first have to decide how to express the animals' features numerically. What is the biggest difference between a cat and a dog? Some might first think of their coat colors; others might think of their sizes. Suppose we distinguish cats from dogs by color, using three features: the mean of the red channel, the mean of the green channel, and the mean of the blue channel. A simple rule could then decide which class a picture belongs to:

def classify(red, green, blue):
    # Weighted average of the three per-channel means;
    # the weights and the 0.6 threshold are illustrative.
    if 0.5 * red + 0.3 * green + 0.2 * blue > 0.6:
        return "cat"
    return "dog"
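The three color features themselves can be computed directly from the image's pixels. A minimal sketch, assuming each pixel is an (r, g, b) tuple with channels scaled to [0, 1] (the pixel values are made up for illustration):

```python
# Each pixel is an (r, g, b) tuple with channels scaled to [0, 1].
# These values are invented for illustration.
pixels = [(0.9, 0.6, 0.4), (0.8, 0.5, 0.3), (0.7, 0.6, 0.5)]

n = len(pixels)
red   = sum(p[0] for p in pixels) / n   # mean of the red channel
green = sum(p[1] for p in pixels) / n   # mean of the green channel
blue  = sum(p[2] for p in pixels) / n   # mean of the blue channel

score = 0.5 * red + 0.3 * green + 0.2 * blue
label = "cat" if score > 0.6 else "dog"
print(red, green, blue, label)
```

In practice the pixel data would come from an image library, but the feature computation is just these three averages.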
However, with only these three features, the classification results may not be satisfactory. We can therefore add more features, such as size and texture, and the results may well improve. But are more features always better? As Figure 1 shows, classifier performance rises with the number of features only up to a certain point; beyond it, performance falls instead of rising. This phenomenon is called the "curse of dimensionality".

Continuing the example, suppose the number of cats and dogs on the planet is unlimited. Due to limited time and computing power, we select only 10 photos as training samples, and our aim is to train on them a linear classifier that correctly classifies the remaining cat and dog photos. We start by using a single feature to tell cats from dogs.
With only one feature, cats and dogs are spread almost uniformly along the segment, and it is hard to separate the 10 photos linearly. So what happens after we add a second feature?

After adding a feature, we find that we still cannot draw a line that separates the cats from the dogs, so we consider adding a third feature.
At this point we finally find a plane that separates the cats from the dogs. Now note the sample density at each step. With one feature, assuming the feature space is a segment of length 5, the sample density is 10/5 = 2. With two features, the feature-space area is 5 * 5 = 25, and the density is 10/25 = 0.4. With three features, the volume is 5 * 5 * 5 = 125, and the density is 10/125 = 0.08. If we keep adding features, the samples become ever sparser, which makes it easier to find a hyperplane separating the training samples: as the number of features tends to infinity, the sample density becomes so sparse that the probability of misclassifying a training sample tends to zero. But when we map the classification result from the high-dimensional space back to a low-dimensional space, a serious problem appears.
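The density numbers above follow directly from dividing the fixed sample count by the growing feature-space volume. A quick sketch, using the segment length 5 and the 10 samples from the example:

```python
# Sample density = sample count / volume of the feature space.
# With a fixed budget of 10 samples and each feature spanning a
# segment of length 5, the volume grows as 5**d.
samples = 10
side = 5

for d in (1, 2, 3):
    volume = side ** d
    print(d, volume, samples / volume)   # 5 -> 2.0, 25 -> 0.4, 125 -> 0.08
```

The same 10 samples that densely cover a line segment are spread very thinly over a cube.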
Consider the result of mapping the three-dimensional feature space back to two dimensions. Although the training samples are linearly separable in the high-dimensional feature space, after mapping to the low-dimensional space the result is just the opposite. In fact, making the data linearly separable by adding features is equivalent to training a complex nonlinear classifier in the low-dimensional space. Such a nonlinear classifier is too "clever": it has learned only the special cases of the training set, and when used to recognize test samples that never appeared during training, the results are usually poor. This is exactly the overfitting we know from machine learning.
A linear classifier that uses only two features misclassifies a few training samples, and its accuracy in Figure 4 does not look high; however, the two-feature linear classifier generalizes better than the three-feature one. Because it learns an overall trend rather than a set of special cases, it is better at distinguishing samples it has never seen. In other words, by reducing the number of features we can avoid overfitting, and thus avoid the "curse of dimensionality".
Here is another way to interpret the "curse of dimensionality". Suppose there is only one feature, with range [0, 1], and the feature value of every cat and dog is unique. If we want the training samples to cover 20% of the feature's range, we need 20% of the total population of cats and dogs. After adding a second feature, covering 20% of the feature space requires spanning 45% of each feature's range (0.45^2 ≈ 0.2). With a third feature, it takes 58% (0.58^3 ≈ 0.2). As the number of features grows, covering the same 20% of the feature space requires more and more training samples. If there are not enough training samples, overfitting may occur.
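The per-axis span needed to cover a fixed fraction of a d-dimensional space is that fraction's d-th root, which is where 0.45 and 0.58 come from. A short check:

```python
# To cover a fraction `target` of a d-dimensional feature space with an
# axis-aligned sub-cube, each side must span target**(1/d) of its axis.
target = 0.2

for d in (1, 2, 3, 5, 10):
    span = target ** (1 / d)
    print(d, round(span, 2))   # 0.2, 0.45, 0.58, then creeping toward 1.0
```

Already at 10 features, covering 20% of the space means spanning about 85% of every single axis.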

The examples above show that as the number of features grows, the training samples become sparser, the classifier's parameter estimates become less accurate, and overfitting becomes more likely. Another effect of the "curse of dimensionality" is that this sparsity is not evenly distributed: samples near the center of the feature space are sparser than samples near the edges.
Suppose we have a square two-dimensional feature space, as in Figure 8, with an inscribed circle. Because the region near the center is sparse, samples that fall in the four corners of the square are harder to classify than samples inside the circle. So, as the number of features increases, what happens to the volume of the inscribed sphere? Assume a hypercube with edge length d = 1; the volume of the inscribed hypersphere of radius 0.5 in k dimensions is

    V(k) = (pi^(k/2) / Γ(k/2 + 1)) * 0.5^k

As the number of features increases, the hypersphere's volume gradually shrinks toward zero, while the hypercube's volume stays constant at 1. This result is unexpected, but it partly explains the "curse of dimensionality" in classification: in a high-dimensional feature space, most of the training samples lie in the corners of the hypercube.
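The shrinkage is easy to verify numerically with the closed-form ball-volume formula V(k) = pi^(k/2) / Γ(k/2 + 1) * r^k, using only the standard library:

```python
import math

def hypersphere_volume(k, r=0.5):
    """Volume of a k-dimensional ball of radius r."""
    return (math.pi ** (k / 2)) / math.gamma(k / 2 + 1) * r ** k

# The unit hypercube always has volume 1, so this value is also the
# fraction of a uniform sample that falls inside the inscribed sphere.
for k in (2, 3, 5, 8, 20):
    print(k, hypersphere_volume(k))
```

At k = 2 the inscribed circle fills pi/4 ≈ 79% of the square; by k = 8 the inscribed sphere holds under 2% of the cube's volume, and by k = 20 it is negligible.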
Now look at how samples are distributed in different dimensions. An 8-dimensional feature space has 2^8 = 256 corners, and roughly 98% of uniformly distributed samples lie in those corners. Moreover, as the dimension grows, the ratio in Equation 2 tends to 0:

    (dist_max - dist_min) / dist_min -> 0

where dist_max and dist_min denote the maximum and minimum distance from a sample to the center.
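Both claims can be checked with a small Monte Carlo experiment using only the standard library (the sample count and the chosen dimensions are arbitrary):

```python
import random

def corner_fraction_and_spread(dim, n=20000, seed=0):
    """Draw n uniform points in the unit hypercube and measure:
    - the fraction outside the inscribed sphere (i.e. "in the corners"),
    - the relative spread (dist_max - dist_min) / dist_min of the
      distances from the points to the center."""
    rng = random.Random(seed)
    in_corners = 0
    dists = []
    for _ in range(n):
        point = [rng.random() for _ in range(dim)]
        d2 = sum((x - 0.5) ** 2 for x in point)
        if d2 > 0.25:          # outside the radius-0.5 inscribed sphere
            in_corners += 1
        dists.append(d2 ** 0.5)
    spread = (max(dists) - min(dists)) / min(dists)
    return in_corners / n, spread

for dim in (2, 8, 100):
    frac, spread = corner_fraction_and_spread(dim)
    print(dim, round(frac, 3), round(spread, 2))
```

In 2 dimensions only about 21% of the points fall in the corners and the distance spread is enormous (some points land almost on the center); in 8 dimensions about 98% of the points are in the corners, and by 100 dimensions essentially all of them are, while the distances to the center bunch together.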

Thus, in a high-dimensional feature space, distance measurements between samples become meaningless. Since many classifiers rely on distance metrics such as the Euclidean or Manhattan distance, their performance declines when there are too many features.

So, how do we avoid the "curse of dimensionality"? Figure 1 shows that classifier performance rises with the number of features up to a certain point and then falls. Where exactly is that threshold? Currently there is no general way to determine it for a given classification problem; it depends on the number of training samples, the complexity of the decision boundary, and the type of classifier. In theory, if the number of training samples were infinite, the "curse of dimensionality" would not arise and we could use any number of features to train a classifier. In practice the number of training samples is limited, so we should not use too many features. Moreover, classifiers that fit precise nonlinear decision boundaries, such as neural networks, k-NN, and decision trees, often generalize poorly and are prone to overfitting, so the number of features should be chosen carefully when designing them. In contrast, for classifiers with better generalization ability, such as naive Bayes and linear classifiers, the number of features can be increased somewhat.

Given N features, how do we choose the best M of them? The simplest, brute-force approach is to try every combination of M features and pick the best one, but this is very time-consuming and usually impractical. Fortunately, there are many feature selection algorithms to help us determine how many and which features to use. In addition, there are many feature extraction methods, such as PCA. Cross-validation is also often used to detect and avoid overfitting.
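To see why trying all combinations is impractical, count them: choosing M of N features means evaluating C(N, M) candidate subsets, which explodes quickly. A sketch using only the standard library (the feature counts are made up for illustration):

```python
import math
from itertools import combinations

# Number of M-feature subsets an exhaustive search must evaluate.
for n, m in [(10, 3), (50, 10), (100, 20)]:
    print(n, m, math.comb(n, m))

# Enumerating the subsets themselves is feasible only for tiny N:
features = ["red", "green", "blue", "size", "texture"]
subsets = list(combinations(features, 2))
print(len(subsets))   # C(5, 2) = 10 pairs to evaluate
```

Even at N = 50 and M = 10 there are over ten billion subsets, which is why greedy or model-based feature selection methods are used instead.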
     
         
         
         
     
           
     
  CopyRight 2002-2022 newfreesoft.com, All Rights Reserved.