Dimensionality Reduction
Exploratory data analysis and visualisation are of critical importance to many areas of science. The fundamental problem of dimensionality reduction is how to discover compact representations of high-dimensional data (1).
High-dimensional datasets are defined by their inclusion of a large number of distinct variables; for example, the health status of a patient can include more than 100 measured parameters covering blood analysis, immune system status, genetic background, nutrition, alcohol and drug consumption, operations, treatments and diagnosed diseases (2). Each of these parameters is a dimension (or feature) of the dataset and can be used to identify patterns and trends. This might suggest that a higher-dimensional dataset is always preferable, since it should allow us to distinguish between different conditions and find correlations more effectively. In practice this is not the case: effectively reducing the number of dimensions a dataset contains is a key problem in machine learning, and lower-dimensional data is desirable due to the effects of the curse of dimensionality (3).
The Curse of Dimensionality
The curse of dimensionality is the problem of detecting structure within data embedded in a high-dimensional space. When classifying objects, it is necessary to utilise features: descriptions of some characteristic of each object type that can be represented numerically (so that statistical algorithms and data analysis can be applied). An example would be using the colour values of an image of a car to determine whether it was produced by Audi or BMW. Clearly, this feature alone is not adequate to identify a car's manufacturer with any reliability. It is possible, therefore, to increase the dimensionality of the problem by adding more features, perhaps determining the shape of each car using edge detection and colour gradients. These additional features, in combination, could all be used by our classifier to identify the make of each car more accurately.
As mentioned before, it is tempting at this point to increase the dimensionality further, adding more and more features to allow us to perfectly classify every make of car using hundreds of different features. It is here that the curse of dimensionality comes into play:
"For a fixed size of training data, increasing the dimensionality (number of features) past some optimal number causes classifier performance to degrade substantially."
This conclusion is based on Hughes' Phenomenon (4) and is depicted in Figure 1. The cause of the curse of dimensionality comes down to the decreasing density of a constant number of data points as the number of dimensions increases (5). In a high-dimensional space, most points selected from some finite, random set of N data points inside a similarly finite volume end up distant from each other, accumulating in the corners of the space (6).
Using the car example, assume we wanted to classify Audis and BMWs using a single feature (for example, the proportion of pixels that are "red") that varies between 0 and 1, with training data that covered 10% of this total possible range of feature values. The required training sample size for this would be 10% of the complete population of Audis and BMWs. If we added another feature that varied in the same way (going up to a 2-dimensional feature space) and wanted the same 10% coverage of the possible range of feature values, we would require coverage of more than 31% of the total range in each dimension, since 0.31 × 0.31 ≈ 0.10 (see Figure 2).
31% of possible Feature_{1} values × 31% of possible Feature_{2} values ≈ 10% of the total range of values

If the size of the training data did not increase along with the number of dimensions, the data would become more and more sparse as the dimensionality increased (5). This data sparsity leads to a phenomenon known as overfitting, in which the classifier begins to learn specific details of the training data that are not relevant to the general concept being classified (3). Overfitting results in poor classifier performance on real-world data, which is unlikely to conform to the case-specific details the classifier has learnt. Put simply, a linearly increasing number of dimensions necessitates an exponentially increasing number of training data points to maintain an equivalent level of classifier performance on real-world data.
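The arithmetic above generalises: to cover a fixed fraction of the feature-space volume, the training data must span the d-th root of that fraction along each axis. A minimal sketch (the function name is illustrative, not from the source):

```python
# How much of each feature's range must the training data span so that
# the covered fraction of the whole feature space stays fixed at 10%?
def per_dimension_coverage(target_fraction, n_dims):
    """Per-axis coverage c such that c ** n_dims == target_fraction."""
    return target_fraction ** (1.0 / n_dims)

for d in (1, 2, 3, 5, 10):
    c = per_dimension_coverage(0.10, d)
    print(f"{d:2d} dimensions -> {c:.1%} of each feature's range")
```

For d = 2 this reproduces the 31% figure above (0.10^(1/2) ≈ 0.316), and the per-axis coverage needed keeps climbing towards 100% as d grows, which is the sparsity problem in miniature.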
To summarise, high-dimensional spaces are hard to reason about and impossible to visualise, and the number of possible feature values grows exponentially with each dimension, making a complete enumeration of all subspaces practically impossible in high dimensions. Additionally, incorporating a large number of features increases the level of noise within the data, as well as causing the training data to become more and more sparse, necessitating exponential increases in training sample size for linear increases in dimensionality, all to avoid overfitting (6).
Reducing the number of dimensions used leads to imperfect classification of the training data; however, classifier performance on real-world data is still greatly superior to that achieved with high-dimensional datasets, due to the absence of overfitting.
Locally Linear Embedding
The paper that revolutionised the field of dimensionality reduction was titled "Nonlinear Dimensionality Reduction by Locally Linear Embedding" and was produced by Sam Roweis during his tenure at the Gatsby Computational Neuroscience Unit at UCL, in collaboration with Lawrence Saul of AT&T's Research Lab (1). The paper described Locally Linear Embedding (LLE), an unsupervised learning algorithm that "attempts to discover nonlinear structure in high dimensional data by exploiting the local symmetries of linear reconstructions" (7). LLE takes a high-dimensional dataset as an input and maps it to a single, global coordinate system of lower dimensionality. The algorithm is adept at generating highly nonlinear embeddings: identifying complex groupings of data with nonlinear relationships in high dimensions, and maintaining these groupings whilst mapping data points to a lower-dimensional space (1). This contrasts with Principal Component Analysis, one of the main linear techniques for dimensionality reduction, which simply performs a direct linear mapping of data to a lower-dimensional space and is therefore inappropriate for nonlinear problems.
LLE initially takes one parameter, k, and constructs a k-nearest-neighbours (k-NN) graph linking each point in the dataset to its k closest points. It is important that the k passed to the algorithm is large enough to produce a connected k-NN graph; if the chosen k is too small, the resulting k-NN graph may be formed of unconnected segments, making it unsuitable for use with LLE. Following the construction of the k-NN graph, a set of weights is computed for each point that best linearly reconstructs the point from its k neighbours (i.e. the best possible description of the point as a linear combination of the k points closest to it). The algorithm then uses an eigenvector-based optimisation technique to locate a low-dimensional embedding of the points, whilst ensuring that each point can still be described by the same linear combination of its k neighbours. This is the key to how LLE retains the structure of high-dimensional data in lower dimensions (7).
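The pipeline described above can be sketched with scikit-learn's implementation of the algorithm; the swiss-roll dataset and all parameter values here are illustrative, not from the source:

```python
# Minimal LLE sketch: unroll a 3-D "swiss roll" (a rolled-up 2-D sheet)
# into 2 dimensions using scikit-learn's LocallyLinearEmbedding.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# 1000 points sampled from a nonlinear 2-D manifold embedded in 3-D.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# n_neighbors is the parameter k discussed above; n_components is the
# dimensionality of the embedding the algorithm should produce.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
X_embedded = lle.fit_transform(X)

print(X_embedded.shape)  # each 3-D point now has 2-D coordinates
```

Choosing `n_neighbors` too small here reproduces the disconnected-graph failure mode described above, which is why the value of k matters so much in practice.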
Problems with LLE
Since its introduction, LLE has become a classic method of nonlinear dimensionality reduction due to its ability to deal with huge amounts of high-dimensional data and its innovative approach to locating low-dimensional structures in high dimensions. LLE is also simpler to compute than its competitors (including Isomap, Laplacian eigenmaps, and local tangent space alignment), and returns useful results on a wider range of datasets (9). Despite this, LLE is still imperfect; some examples of its weaknesses follow (10):
- If a low-density sample is provided to LLE, or the points have not been sampled uniformly, then LLE is unable to locate non-uniform warps and folds.
- Selection of the parameter k (which is used to calculate the k-nearest-neighbours graph in the first step of the algorithm) has a substantial impact on the performance of LLE.
- LLE is extremely sensitive to any form of noise within the data; small amounts of noise relative to the dataset can often lead to failure in producing lower-dimensional coordinates.
- LLE can encounter ill-conditioned eigenvalue problems:
  - A square matrix is ill-conditioned if it is invertible but can quickly become non-invertible if some of its entries are changed by a small amount.
  - Solving linear equation systems with ill-conditioned coefficient matrices is difficult, as small variations in the data can result in hugely different answers. For example, take: \[A = \begin{bmatrix}4.5 & 3.1 \\ 1.6 & 1.1 \end{bmatrix}, \quad b = \begin{bmatrix}19.249 \\ 6.843 \end{bmatrix}, \quad b_{1} = \begin{bmatrix}19.25 \\ 6.84 \end{bmatrix}\]
  - If we solve Ax = b, then x = (3.94, 0.49), but if we solve Ax = b_{1}, then x = (2.9, 2.0).
  - The minor change from b to b_{1} could easily occur as a result of rounding or floating-point errors. As we have seen, these minute changes cause huge shifts in the output, and LLE's vulnerability to these ill-conditioned eigenvalue problems is of concern.
- LLE is an unsupervised algorithm and makes the assumption that all of the data resides on one continuous manifold, which is inherently untrue for classification problems with multiple classes.
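The ill-conditioning example above is easy to verify numerically; the perturbation of b in the third decimal place moves the solution by whole units:

```python
import numpy as np

# The 2x2 system from the example above: A is nearly singular
# (det(A) = 4.5*1.1 - 3.1*1.6 = -0.01), so tiny changes to b,
# well within rounding error, shift the solution dramatically.
A = np.array([[4.5, 3.1],
              [1.6, 1.1]])
b = np.array([19.249, 6.843])
b1 = np.array([19.25, 6.84])

x = np.linalg.solve(A, b)    # ~ (3.94, 0.49)
x1 = np.linalg.solve(A, b1)  # ~ (2.90, 2.00)

print(x, x1)
print(np.linalg.cond(A))  # a large condition number flags the danger
```

A condition number in the thousands, as here, means roughly that three to four decimal digits of precision in the inputs can be lost in the outputs, which is exactly what the b versus b_{1} comparison shows.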
Modifications to LLE
To overcome some of the issues mentioned above, LLE has been extended in many ways by various researchers.
ISOLLE: LLE with Geodesic Distance
The only nonlinear step in the LLE algorithm is the selection of a point's closest neighbours, and this plays a crucial part in the algorithm's performance. If the points are sampled in a biased way or contaminated by noise, the reconstruction weights estimated by LLE end up poorly reflecting the local geometry of the manifold, which in turn degrades the lower-dimensional embedding. One proposed solution to this is altering the distance metric used by LLE. In the original version of the algorithm, the Euclidean distance was used directly, which can result in short circuits: assignments of neighbours that are actually very distant from the data point itself, as seen in Figure 4. An extension of LLE (known as ISOLLE) utilises the geodesic distance between points (the length of the shortest path connecting them through a graph of neighbouring points) instead of the Euclidean distance. There is evidence to show that ISOLLE reduces the number of short circuits occurring whilst locating the neighbours of a point (11).
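Geodesic distances of this kind can be sketched by running a shortest-path algorithm over a weighted neighbourhood graph. This is one common formulation (as used by Isomap-style methods) rather than ISOLLE's exact recipe, and the data and parameters are illustrative:

```python
# Sketch of geodesic (graph shortest-path) distances as a substitute
# for plain Euclidean distance when choosing neighbours.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-in for points sampled from a manifold

# Weighted k-NN graph: edges carry the Euclidean length between
# locally close points only.
graph = kneighbors_graph(X, n_neighbors=6, mode="distance")

# Geodesic distance = length of the shortest path through the graph,
# so points on opposite sides of a fold stop looking "close" even if
# their straight-line Euclidean distance is small.
geodesic = shortest_path(graph, method="D", directed=False)
print(geodesic.shape)
```

Selecting neighbours by these graph distances instead of raw Euclidean ones is what suppresses the short circuits described above.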
Improved LLE through New Distance Computing
Utilising a different method of computing the distance between points is also an effective way to reduce the effect that the choice of the parameter K has upon LLE's dimensionality reduction. In standard LLE, the nearest-neighbour points cover a larger area when the surrounding region is sparse and a smaller area when the surrounding region is dense with points. It is desirable to eliminate the effects of this uneven dispersal of points so that the low-dimensional embedding is less influenced by the distribution of the sample points.
Improved LLE uses a new distance formula roughly based upon the Euclidean distance (12):
\[d_{ij} = \frac{\left\| x_{i} - x_{j} \right\|}{\sqrt{M(i)M(j)}}, \quad \text{where} \quad M(i) = \frac{1}{K}\sqrt{\sum_{l=1}^K \left\| x_{i} - x_{l} \right\|^2}\]

The effect of using this modified formula is that the distance between sample points in a dense area becomes larger and the distance between sample points in a sparse area becomes smaller. In essence, the distribution of sample points becomes more proportional, reducing the effects of their original distribution (12). Also, a much larger range of values for K can now be used to produce good results, as seen below in the reduction of a 3-dimensional half-cylinder to 2 dimensions (all images from (10)).
When K is too small (in the case K = 4), neither LLE nor Improved LLE produces acceptable results. From K = 9 onwards, however, Improved LLE exhibits superior performance; standard LLE does not produce good results until K = 19, at which point Improved LLE is still performing well (and has been since K = 9).
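The modified distance formula can be sketched directly in NumPy; the function name and the interpretation of the K nearest neighbours as the K smallest non-self distances are assumptions of this sketch:

```python
import numpy as np

def improved_distances(X, K):
    """Density-adjusted pairwise distances per the Improved LLE formula:
    d_ij = ||x_i - x_j|| / sqrt(M(i) * M(j)), with
    M(i) = (1/K) * sqrt(sum over the K nearest neighbours of ||x_i - x_l||^2).
    """
    # Plain Euclidean distance matrix between all pairs of rows of X.
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))

    # For each point, take its K smallest distances excluding itself.
    nn = np.sort(D, axis=1)[:, 1:K + 1]
    M = np.sqrt((nn ** 2).sum(axis=1)) / K

    # Dividing by sqrt(M(i)M(j)) stretches distances in dense regions
    # (small M) and shrinks them in sparse regions (large M).
    return D / np.sqrt(np.outer(M, M))
```

Feeding these adjusted distances into the neighbour-selection step is what makes the embedding less sensitive to the sample points' original density, and hence to the exact choice of K.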
Data from the UCI handwritten digit database was used to test the performance of the Improved LLE algorithm. In this dataset, each grayscale image depicting a handwritten character is 8 × 8 pixels (64 dimensions), plus an additional dimension for its label, resulting in a dimensionality of 65 per sample. Extensive combinations of d and K were tested (where d is the reduced number of dimensions and K is the number of nearest neighbours to select). Following the dimensionality reduction, the resulting data was passed through a K-nearest-neighbours algorithm to identify the actual class of each query image. Table 1 shows the results of this testing; the values in the table represent the error rate (the average percentage of misclassified patterns in the testing set). It is evident from the data that the average error rate is much lower when using Improved LLE with the same values of d and K (10).
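The evaluation pipeline described above (reduce with LLE, then classify with K-nearest-neighbours and report the error rate) can be approximated with standard LLE on scikit-learn's bundled copy of the 8 × 8 digits data; the split and all d and K values here are illustrative, and this does not reproduce the authors' Improved LLE numbers:

```python
# Hedged sketch of the "reduce then classify" evaluation: standard LLE
# to d dimensions, then a k-NN classifier on the reduced data.
from sklearn.datasets import load_digits
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)  # 8x8 images flattened to 64 features

# Reduce 64 dimensions to d = 8 using K = 15 neighbours (illustrative).
lle = LocallyLinearEmbedding(n_neighbors=15, n_components=8)
X_low = lle.fit_transform(X)

# Classify in the reduced space and report the misclassification rate,
# the same metric as in Table 1.
X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
error_rate = 1 - clf.score(X_te, y_te)
print(f"error rate: {error_rate:.1%}")
```

Swapping the distance computation in the first LLE step for the density-adjusted formula is what the Improved LLE results in Table 1 measure.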
Conclusion
Dimensionality reduction is a crucial part of preprocessing data before it is utilised in machine learning. Reducing the number of dimensions in a way that preserves structure is highly desirable, and LLE is adept at accomplishing this task. Nevertheless, the algorithm is susceptible to short circuits, vulnerable to noisy data and sensitive to the value of its sole parameter (k). Many extensions to the algorithm have altered the Euclidean distance measure used by standard LLE; these alterations have been effective at fixing short circuits and parameter sensitivity. Extensions also exist to convert LLE into a supervised algorithm, making it suitable for problems with multiple classes and many, potentially discontiguous, manifolds.