Parallel k-Nearest Neighbor

[Figure: example of k-nearest neighbour classification (image via Wikipedia)]

K-Nearest Neighbor, or the KNN algorithm, is a supervised learning method that has been used in many applications, including data mining, statistical pattern recognition, and image processing. The algorithm does not build a classification model; instead, it classifies directly from the training values held in storage or memory. To identify the class of an input, it chooses the class to which the majority of the input's k closest neighbors belong. KNN is considered one of the simplest machine learning algorithms; however, it is computationally expensive, and when the training set becomes large, classification becomes very slow. Several attempts have been made to parallelize KNN on the GPU by taking advantage of the GPU's naturally parallel architecture [5]. In this paper, however, the KNN algorithm was parallelized on the CPU by distributing the distance computations of the k nearest neighbors among different processors. The parallel implementation greatly increased the speed of the KNN algorithm by reducing its time complexity from O(D), where D is the number of records, to O(D/p), where p is the number of processors.

Keywords: K-Nearest Neighbor, GPU, many-core, CPU, parallel processors.

INTRODUCTION

The KNN algorithm is a widely applied method for classification in machine learning and pattern recognition. It has long been known to be computationally intensive for large training sets, and it did not gain popularity until the 1960s, when increased computing power became available.

Nearest-neighbor classifiers are based on learning by analogy, that is, by comparing a given test tuple with training tuples that are similar to it. The training tuples are described by n attributes. Each tuple represents a point in an n-dimensional space. In this way, all of the training tuples are stored in an n-dimensional pattern space. When given an unknown tuple, a k-nearest-neighbor classifier searches the pattern space for the k training tuples that are closest to the unknown tuple. These k training tuples are the k “nearest neighbors” of the unknown tuple. “Closeness” is defined in terms of a distance metric, such as Euclidean distance. The Euclidean distance between two points or tuples, say, X1 = (x11, x12, …, x1n) and X2 = (x21, x22, …, x2n), is:

 

dist(X_1, X_2) = \sqrt{\sum_{i=1}^{n} (x_{1i} - x_{2i})^2}    (1)

 

The above formula applies to numerical data. For categorical attributes, a simple method is to compare the corresponding values of the attributes in tuple X1 with those in tuple X2. If the two are identical, the difference between them is taken to be 0; if they are different, the difference is taken to be 1. Other methods may incorporate more sophisticated schemes for differential grading.
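As a small illustration (my own sketch, not from the paper; all names are hypothetical), a distance function combining Euclidean terms for numeric attributes with the 0/1 difference for categorical attributes could look like this in VB.NET:

    ' Hypothetical mixed-attribute distance: Euclidean over the numeric
    ' attributes plus a 0/1 term per categorical attribute, as described above.
    Public Function MixedDistance(ByVal num1() As Double, ByVal num2() As Double,
                                  ByVal cat1() As String, ByVal cat2() As String) As Double
        Dim sum As Double = 0
        For i As Integer = 0 To num1.Length - 1
            sum += (num1(i) - num2(i)) ^ 2        ' numeric attribute: squared difference
        Next
        For i As Integer = 0 To cat1.Length - 1
            If cat1(i) <> cat2(i) Then sum += 1   ' categorical attribute: 0 if identical, else 1
        Next
        Return Math.Sqrt(sum)
    End Function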

When computed in this way on a serial machine, the running time is clearly linear in the number of data points. Hence, there is interest in mapping the process onto a highly parallel machine in order to further reduce the running time of the algorithm. It should be noted, however, that serial implementations of the k-NN rule employing branch-and-bound search algorithms [1] (a systematic method for solving optimization problems) can scale sublinearly, such that the asymptotic time complexity may even be constant with respect to the number of data points. Nonetheless, a fully parallel hardware implementation should still be much faster than the most efficient serial implementations.

Several parallel methods have been devised to increase the speed of the KNN algorithm, including:

1. The first uses neural networks to construct a multi-layer feed-forward network that implements a 1-NN rule exactly. The advantage of this approach is that the resulting network can be implemented efficiently. The disadvantage is that the training time can grow exponentially with the dimensionality of the pattern space, which could make it impractical.

 

2. The second is a CUDA implementation of the “brute force” kNN search described in [6]. The advantage of this method is the highly parallel architecture of the GPU.

 

In this paper, the “brute force” kNN is studied and implemented on a CPU rather than a GPU, where the degree of parallelism is determined by the number of available cores or processors. The proposed algorithm is not expected to outperform state-of-the-art GPU implementations but rather to provide equivalent performance on the CPU. The benefit then becomes the ability to share load between the CPU and GPU without degradation or loss of speed when switching between the two processor architectures.

PROPOSED METHOD

 

The brute-force kNN algorithm is highly parallelizable [2] by nature, since the computation of the distance between the input sample and any single training sample is independent of the distance computation for every other sample. This allows the work to be partitioned with minimal synchronization effort. In fact, no inter-processor communication or message passing is required at all while each processor computes the distances between the samples in its local storage and the input sample. When all processors have finished the distance computations, the final step is for a designated master processor to collect the results from all processors, sort the distances in ascending order, and use the first k measures to determine the class of the input sample.

 

The proposed algorithm is described in the following steps (a minimal code sketch follows the list):

1. Select 1 processor to be the master; the remaining N-1 processors are slaves.

2. The master divides the training samples into N subsets and distributes 1 subset to each processor, keeping 1 subset for local processing (the master participates in the distance computation too).

3. Each processor computes its distance measures independently, storing the computed measures in a local array.

4. When a processor finishes its distance calculations, it transmits a message to the master indicating end of processing.

5. The master notes the end of processing for the sending processor and acquires the computed measures by copying them into its own array.

6. After the master has collected all distance measures from all processors, the following steps are performed:

a. Sort all distance measures in ascending order.

b. Select the top k measures.

c. Count the occurrences of each class among the top k measures.

d. Assign the input element to the class with the highest count among the top k measures.
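As a concrete illustration of steps 1-6, here is a minimal VB.NET sketch. It assumes a shared-memory thread pool (Parallel.For from System.Threading.Tasks) in place of the explicit master/slave message passing described above; the partitioning of the distance loop across cores is what yields the O(D/p) behavior. All names in the snippet are hypothetical.

    Imports System.Linq
    Imports System.Threading.Tasks

    Module ParallelKnnSketch

        ' Squared Euclidean distance between two feature vectors
        ' (monotonic in the true distance, so the k-NN ranking is unaffected).
        Private Function SquaredDistance(ByVal a() As Double, ByVal b() As Double) As Double
            Dim sum As Double = 0
            For i As Integer = 0 To a.Length - 1
                sum += (a(i) - b(i)) ^ 2
            Next
            Return sum
        End Function

        ' Classifies query by majority vote among its k nearest training samples.
        Public Function Classify(ByVal training()() As Double, ByVal labels() As Integer,
                                 ByVal query() As Double, ByVal k As Integer) As Integer
            Dim distances(training.Length - 1) As Double

            ' Steps 2-4: every distance is independent, so the loop partitions
            ' cleanly across the available cores (the O(D/p) step).
            Parallel.For(0, training.Length,
                         Sub(i) distances(i) = SquaredDistance(training(i), query))

            ' Steps 5-6: the "master" sorts the distances and votes over the top k.
            Dim nearest = Enumerable.Range(0, training.Length).
                          OrderBy(Function(i) distances(i)).Take(k)
            Return nearest.GroupBy(Function(i) labels(i)).
                   OrderByDescending(Function(g) g.Count()).First().Key
        End Function

    End Module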

EXPERIMENTS & RESULTS

 

The goal of this experiment was to study the performance of parallelizing KNN on the CPU for large data sets versus parallel GPU implementations. The Iris database, “perhaps the best known database to be found in the pattern recognition literature” [7], was used to train the system. The data set contains three classes of fifty instances each, where each class refers to a type of iris plant. Five attributes are present in this database: Sepal Length, Sepal Width, Petal Length, Petal Width, and the class label attribute, which can take three values: “Iris-setosa”, “Iris-virginica”, and “Iris-versicolour”. The data set was preprocessed before running the algorithm by building new data structures so that it could fit in memory. However, given the small number of records in the Iris database, the experiment would not yield solid results, so the 50 records were cloned and randomly appended 1,000 times to produce a new, larger Iris database of 50,000 records in total.

The experiment was run on an Intel 8-core machine, and the results were compared with a corresponding implementation on a 64-core GPU. To compensate for the GPU's larger number of cores, its degree of parallelism was set to 8 to match the CPU, so only 8 of the available 64 cores were used. By applying the parallel procedure described above, the CPU program was able to compete with the GPU, showing equivalent performance and even outperforming the GPU once the test was repeated several times. This was due to cache locality: after several repeated runs, cache misses became less frequent, so the fetch time incurred by continual trips to memory diminished. This also improved bus utilization and hence reduced power consumption.

These results were expected because KNN is by nature highly parallelizable and scales well on many-core architectures. The complexity of the parallel algorithm is O(D/p), where D is the number of records and p is the number of available cores; in the limiting case where p matches D, the complexity reduces to constant time O(1).

 

CONCLUSION

 

In this paper, a parallel KNN algorithm was implemented by applying a new approach to distributing the computation of the distance measures across all data points. The parallel technique reduced the running time of the algorithm on the CPU, making it faster and more efficient than serial kNN and competitive with state-of-the-art GPU-based implementations.

ChiMerge discretization algorithm

ChiMerge is a simple algorithm that uses the chi-square statistic to discretize numeric attributes. It is a supervised, bottom-up data discretization method. It examines each pair of adjacent intervals to determine whether their class frequencies differ significantly, testing the hypothesis that the class frequencies are independent of which of the two adjacent intervals a sample falls in. If the hypothesis cannot be rejected, the intervals are merged into a single interval; otherwise, they remain separate.

In this problem we select one of the following as the attribute: sepal length, sepal width, petal length, or petal width. Three classes are available: Iris-setosa, Iris-versicolor, and Iris-virginica.

The ChiMerge algorithm is described below:

  1. Set the values of minintervals and maxintervals so that the discretization results in neither too large nor too small a number of intervals. Choose minintervals and maxintervals such that:

2 ≤ minintervals < maxintervals

  2. Look up the threshold value to use, based on the significance level and the number of degrees of freedom. The threshold value can be found in statistical tables and depends on two factors:
  • The significance level: the higher the significance level, the higher the threshold value and the more likely it is that adjacent intervals will be merged.
  • The number of degrees of freedom = (2 - 1) * ([number of classes] - 1) = [number of classes] - 1

Here we have 3 classes, so the number of degrees of freedom is 2.

  3. A) Sort the values of the attribute into ascending numerical order.

B) Create a frequency table containing one row for each distinct attribute value and one column for each class. Enter the number of occurrences of each distinct value of the attribute for each possible class in the cells of the table. Then generate a set of distinct intervals, setting each interval's lower bound to the attribute value (inclusive) that belongs to it and its upper bound to the attribute value (exclusive) belonging to the next interval.

C) If the number of rows = minintervals, stop; otherwise go to the next step.

D) For each pair of adjacent rows in the frequency table, calculate E, the expected frequency for each combination, as the product of the row and column sums divided by the total number of occurrences in the two rows combined. The expected value is the frequency that would be expected to occur by chance under the assumption of independence. Calculate the value of (O − E)²/E (if E < 0.5, replace E by 0.5), where O is the observed frequency for that combination, then add the values obtained to give χ² for that pair of adjacent rows. No χ² is calculated for the final interval because there is no interval below it.

E) ChiMerge selects the smallest value of χ² and compares it with the threshold. The two intervals are merged if χ² is less than the threshold or if the number of rows is greater than maxintervals. When they are merged, the merged row keeps the attribute value and lower interval bound of the first of its two constituent rows and takes the upper interval bound of the second; then go to step C. If the condition is not met, the algorithm stops.

The χ² values are then recalculated for the revised frequency table. ChiMerge proceeds iteratively in this way, merging two intervals at each stage, until the χ² values of all remaining pairs of intervals are greater than the threshold and the number of intervals no longer exceeds the maximum; at that point no further merging is possible and the discretization is complete.
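To make step D concrete, here is a small worked example with hypothetical counts (not taken from the Iris data). Suppose two adjacent intervals have class counts (2, 0, 0) and (1, 1, 0). The row sums are R1 = 2 and R2 = 2, the column sums are C1 = 3, C2 = 1 and C3 = 0, and N = 4. The expected frequencies are E11 = 2 * 3/4 = 1.5, E12 = 2 * 1/4 = 0.5, E21 = 1.5 and E22 = 0.5; the third class contributes nothing, since its observed and expected counts are both 0. Then χ² = (2 − 1.5)²/1.5 + (0 − 0.5)²/0.5 + (1 − 1.5)²/1.5 + (1 − 0.5)²/0.5 ≈ 1.33, which is well below 4.61 (the 90% threshold for 2 degrees of freedom), so this pair would be a candidate for merging.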

Check these ChiMerge PowerPoint slides, which visualize the above algorithm.

Now let's take the IRIS data set, obtained from http://www.ics.uci.edu/~mlearn/MLRepository.html (UC-Irvine Machine Learning Data Repository), as the data set to be discretized. We will perform data discretization for each of the four numerical attributes using the ChiMerge method, with the stopping criterion max-interval = 6.

5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
5.4,3.7,1.5,0.2,Iris-setosa
4.8,3.4,1.6,0.2,Iris-setosa
4.8,3.0,1.4,0.1,Iris-setosa

These attributes refer to Sepal Length, Sepal Width, Petal Length, Petal Width, and Class respectively. Let's create two classes that I'll call “IrisDataItem” and “IrisWorkItem”. The former will be used to store the attribute values of each record when the data file is read, while the latter will be used for later processing and for storing the chi-square value of each distinct attribute value.

Public Class IrisDataItem

    Public Enum IRISClass As Byte
        IrisSetosa = 1
        IrisVersicolour = 2
        IrisVirginica = 3
    End Enum

    Public Property SepalLength As Single
    Public Property SepalWidth As Single
    Public Property PetalLength As Single
    Public Property PetalWidth As Single
    Public Property [Class] As IRISClass

End Class

Public Class IrisWorkItem
    Public Property Attribute As Decimal
    Public Property Frequency As Integer
    Public Property SetosaCount As Integer
    Public Property VersicolorCount As Integer
    Public Property VirginicaCount As Integer
    Public Property Start As Decimal
    Public Property [End] As Decimal
    Public Property ChiSquare As Decimal

    'Operator overloading for merging of two adjacent items
    Public Shared Operator ^(ByVal Item1 As IrisWorkItem, ByVal Item2 As IrisWorkItem) As IrisWorkItem
        Item1.SetosaCount += Item2.SetosaCount
        Item1.VirginicaCount += Item2.VirginicaCount
        Item1.VersicolorCount += Item2.VersicolorCount
        Item1.End = Item2.End
        Item2.Frequency = -1
        Return Item1
    End Operator

    'Operator overloading for calculating the ChiSquare for two items
    Public Shared Operator &(ByVal Item1 As IrisWorkItem, ByVal Item2 As IrisWorkItem) As Decimal
        Return ChiMerge.ChiSquare(Item1, Item2)
    End Operator
End Class
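For illustration, the merge operator might be used like this (a hypothetical snippet, where first and second are two adjacent IrisWorkItem instances):

    first ^= second  ' fold second's class counts and upper bound into first
    ' second.Frequency is now -1, marking the absorbed row for removal

Flagging the absorbed row with Frequency = -1 lets the caller sweep every merged row out of the list in a single pass, which is exactly what the Compute routine shown later does with RemoveAll.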

Our next task is to actually read the data file and create a List(Of IrisDataItem) object. This can easily be done by reading the file from disk, or you can embed the file in the application resources and read it from there. Here are a couple of functions that handle the two ways of reading the file:

    Private Function ReadDataFile(ByVal Path As String) As StringCollection
        If Not String.IsNullOrWhiteSpace(Path) Then
            Dim reader As New IO.StreamReader(Path)
            Dim data As New StringCollection
            While Not reader.EndOfStream
                data.Add(reader.ReadLine)
            End While
            reader.Close()
            Return data
        Else
            Throw New Exception("Invalid IRIS File Path!")
        End If
    End Function

    Private Function ReadDataFileFromResource() As StringCollection
        Dim strFile As String = System.Text.Encoding.ASCII.GetString(My.Resources.iris)
        Dim reader As New IO.StringReader(strFile)
        Dim data As New StringCollection
        While reader.Peek >= 0
            data.Add(reader.ReadLine)
        End While
        reader.Close()
        Return data
    End Function

    Public Shared Function CreateIrisDataItems(ByRef StringItems As StringCollection) As List(Of IrisDataItem)
        If StringItems IsNot Nothing AndAlso StringItems.Count > 0 Then
            Dim items As New List(Of IrisDataItem)
            For Each i In StringItems
                If Not String.IsNullOrWhiteSpace(i) Then
                    Dim item As New IrisDataItem
                    Dim strItem() As String = i.Split(","c)
                    item.SepalLength = CSng(strItem(0))
                    item.SepalWidth = CSng(strItem(1))
                    item.PetalLength = CSng(strItem(2))
                    item.PetalWidth = CSng(strItem(3))
                    Select Case strItem(4)
                        Case "Iris-setosa"
                            item.Class = IrisDataItem.IRISClass.IrisSetosa
                        Case "Iris-versicolor"
                            item.Class = IrisDataItem.IRISClass.IrisVersicolour
                        Case "Iris-virginica"
                            item.Class = IrisDataItem.IRISClass.IrisVirginica
                    End Select
                    items.Add(item)
                End If
            Next
            Return items
        End If
        Return Nothing
    End Function

    Public Shared Function CreateIrisWorkItems(ByVal AttributeIndex As Integer, ByVal DataItems As List(Of IrisDataItem)) As List(Of IrisWorkItem)
        Dim workItems As New List(Of IrisWorkItem)
        DataItems.ForEach(Sub(dataItem)
                              Dim workItem As IrisWorkItem = Nothing
                              Dim value As Decimal
                              Select Case AttributeIndex
                                  Case 0
                                      workItem = workItems.Find(Function(w) w.Attribute = dataItem.SepalLength)
                                      value = dataItem.SepalLength
                                  Case 1
                                      workItem = workItems.Find(Function(w) w.Attribute = dataItem.SepalWidth)
                                      value = dataItem.SepalWidth
                                  Case 2
                                      workItem = workItems.Find(Function(w) w.Attribute = dataItem.PetalLength)
                                      value = dataItem.PetalLength
                                  Case 3
                                      workItem = workItems.Find(Function(w) w.Attribute = dataItem.PetalWidth)
                                      value = dataItem.PetalWidth
                              End Select
                              If workItem IsNot Nothing Then
                                  workItem.Frequency += 1
                              Else
                                  workItem = New IrisWorkItem
                                  workItem.Frequency = 1
                                  workItems.Add(workItem)
                              End If
                              workItem.Attribute = value
                              Select Case dataItem.Class
                                  Case IrisDataItem.IRISClass.IrisSetosa
                                      workItem.SetosaCount += 1
                                  Case IrisDataItem.IRISClass.IrisVersicolour
                                      workItem.VersicolorCount += 1
                                  Case IrisDataItem.IRISClass.IrisVirginica
                                      workItem.VirginicaCount += 1
                              End Select
                          End Sub)
        Return workItems.OrderBy(Function(w) w.Attribute).ToList
    End Function

Having the data ready in our hands, we can now proceed to implement the ChiSquare function, which is basically an implementation of the formula:

\chi^2 = \sum_{i=1}^{2} \sum_{j=1}^{k} \frac{(A_{ij} - E_{ij})^2}{E_{ij}}

Where:

k = number of classes

Aij = number of samples in ith interval, jth class

Eij = expected frequency of Aij = (Ri * Cj) / N

Ri = number of samples in ith interval

Cj = number of samples in jth class

N = total number of samples in the two intervals

If Eij < 0.5, set Eij to 0.5

    Public Shared Function ChiSquare(ByVal Item1 As IrisWorkItem, ByVal Item2 As IrisWorkItem) As Decimal
        ChiSquare = 0
        Dim table(,) As Integer = {{Item1.SetosaCount, Item1.VersicolorCount, Item1.VirginicaCount}, {Item2.SetosaCount, Item2.VersicolorCount, Item2.VirginicaCount}}
        Dim rowSums() As Decimal = {Item1.SetosaCount + Item1.VersicolorCount + Item1.VirginicaCount, Item2.SetosaCount + Item2.VersicolorCount + Item2.VirginicaCount}
        Dim columnSums() As Decimal = {Item1.SetosaCount + Item2.SetosaCount, Item1.VersicolorCount + Item2.VersicolorCount, Item1.VirginicaCount + Item2.VirginicaCount}
        Dim totalSum As Decimal = rowSums.Sum
        Dim E As Decimal = 0
        For i As Integer = 0 To table.GetLength(0) - 1
            For j As Integer = 0 To table.GetLength(1) - 1
                E = rowSums(i) * columnSums(j) / totalSum
                ' E = 0 implies the observed count is 0 as well, so the term is skipped;
                ' otherwise a small E is clamped to 0.5, as the algorithm requires.
                If E > 0 Then ChiSquare += ((table(i, j) - E) ^ 2) / If(E < 0.5, 0.5D, E)
            Next
        Next
    End Function
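As a quick sanity check (a hypothetical snippet), the function reproduces the made-up counts from the worked example in the algorithm description above:

    Dim first As New IrisWorkItem With {.SetosaCount = 2}
    Dim second As New IrisWorkItem With {.SetosaCount = 1, .VersicolorCount = 1}
    Console.WriteLine(first & second)   ' prints approximately 1.3333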

One last thing to do before we implement the ChiMerge algorithm is to set up the initial interval bounds. This is easily achieved by taking the attribute value itself as the lower bound of the interval and the next attribute value as the upper bound. Below is an easy way of doing this:

    Public Shared Sub CreateIntervalBounds(ByVal Items As List(Of IrisWorkItem))
        For i As Integer = 0 To Items.Count - 2
            Items(i).Start = Items(i).Attribute
            Items(i).End = Items(i + 1).Attribute
        Next
        Items(Items.Count - 1).Start = Items(Items.Count - 1).Attribute
        Items(Items.Count - 1).End = Items(Items.Count - 1).Attribute + 1
    End Sub
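For example, if the sorted distinct values of an attribute begin 4.3, 4.4, 4.5, the initial intervals become [4.3, 4.4), [4.4, 4.5), and so on, with the last distinct value v receiving the catch-all interval [v, v + 1).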

Finally, we are ready to implement the ChiMerge algorithm. One thing to note about the implementation below is the use of LINQ to Objects and lambda expressions to filter out tuples that need to be dropped after merging. So the code below will probably compile only under Visual Studio 2010 and .NET Framework 4.0. I haven't tested it on Visual Studio 2008 SP1, but if you do, let me know whether it compiles.

    Public Sub Compute(ByRef Items As List(Of IrisWorkItem))
        For i As Integer = 0 To Items.Count - 2
            Items(i).ChiSquare = Items(i) & Items(i + 1) 'calculate chiSquare
        Next
        Dim minChiSquare As Decimal = Items.Take(Items.Count - 1).Min(Function(x) x.ChiSquare)
        If minChiSquare < THRESHOLD Then
            Dim i As Integer = 0
            While i < Items.Count
                If Items(i).ChiSquare = minChiSquare Then
                    Items(i) ^= Items(i + 1) 'merge
                    Dim j As Integer = i
                    i += 1
                    While i < Items.Count - 1 AndAlso Items(i).ChiSquare = minChiSquare
                        Items(j) ^= Items(i + 1) 'merge
                        i += 1
                    End While
                End If
                i += 1
            End While
            Items.RemoveAll(Function(x) x.Frequency = -1)
            If Items.Count > MIN_INTERVAL Then Compute(Items) ' stop once minintervals is reached (step C)
        Else
            If Items.Count > MAX_INTERVAL Then
                THRESHOLD += 0.5
                Compute(Items)
            End If
        End If
    End Sub
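To tie everything together, here is a minimal driver sketch. It assumes (my assumption, since these parts are not shown above) that the routines live in a ChiMerge class whose fields MIN_INTERVAL, MAX_INTERVAL, and THRESHOLD are initialized to 2, 6, and 4.61D respectively, 4.61 being the 90% chi-square threshold for 2 degrees of freedom. Note that Compute raises THRESHOLD as it runs, so a real driver might reset it before each attribute:

    Public Sub Run(ByVal path As String)
        Dim lines As StringCollection = ReadDataFile(path)
        Dim dataItems As List(Of IrisDataItem) = CreateIrisDataItems(lines)

        ' Discretize each of the four numeric attributes in turn.
        For attributeIndex As Integer = 0 To 3
            Dim workItems = CreateIrisWorkItems(attributeIndex, dataItems)
            CreateIntervalBounds(workItems)
            Compute(workItems)

            Console.WriteLine("Attribute {0}: {1} intervals", attributeIndex, workItems.Count)
            For Each w In workItems
                Console.WriteLine("  [{0}, {1})", w.Start, w.End)
            Next
        Next
    End Sub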

Once you run the algorithm, you can compare the resulting intervals and split points with published ones. Below are my final results compared to the results in the original ChiMerge paper: http://sci2s.ugr.es/keel/pdf/algorithm/congreso/1992-Kerber-ChimErge-AAAI92.pdf