Fourier Transforms improve the performance of a Machine Learning classifier
Recently, I have been developing a dataset to support learning tasks; the work is still in progress. What I would like to highlight from my findings on classifying the neighborhood of a building layout drawn from a distribution is that the feature-trawling and feature-identification processes improve drastically with Fourier transforms. The synthetic data I use draws the dimensions of each data row from Gaussian and uniform distributions.
In the example, I used a LinearSVR regressor to fit the domain knowledge of all the generated bounded areas into a single regression model.
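As a minimal sketch of this setup (the feature names, distribution parameters, and target are my own assumptions, not the actual dataset):

```python
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
n = 500
# Hypothetical layout dimensions: Gaussian widths, uniform depths
widths = rng.normal(loc=4.0, scale=0.5, size=n)
depths = rng.uniform(low=2.0, high=6.0, size=n)
X = np.column_stack([widths, depths])
# Hypothetical target: the bounded area of each layout
y = widths * depths

# A single LinearSVR fit over all generated bounded areas
model = LinearSVR(random_state=0, max_iter=10000)
model.fit(X, y)
```

Since the area is nearly linear in width and depth over these ranges, one linear model captures all the generated rows reasonably well.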
I transformed the labels into symbolic values derived from a Fourier transform.
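The post does not show how the labels are symbolized, so here is a minimal sketch under my own assumptions: take the FFT of the numeric labels and bin each coefficient's magnitude into a small symbolic alphabet.

```python
import numpy as np

# Hypothetical numeric labels
labels = np.array([3.2, 1.5, 4.8, 2.1, 3.9, 0.7, 2.6, 4.1])

spectrum = np.fft.fft(labels)
magnitudes = np.abs(spectrum)

# Map each magnitude into one of four symbolic bins: 'a'..'d'
edges = np.linspace(magnitudes.min(), magnitudes.max(), 5)
idx = np.clip(np.digitize(magnitudes, edges) - 1, 0, 3)
symbols = np.array(list("abcd"))[idx]
```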
Source: enscalo / neighborhood-classification on Bitbucket
The layouts and the bounded areas are classified using mass flux as the parameter.
The algorithm's training accuracy improves, provided its initial implementation already works off the shelf.
A Fourier transform is generated using simple code, given as:
import numpy as np
np.fft.fft(np.arange(0, 10, 1))
Using Fourier transforms helps in drawing conclusions about how we derive the label from the originally assigned labels. In my case this was simple, since the neighborhood-classification value originated from formulated values. My idea was to use the dimensions of the windows, doors, and transits (the open passages) and correlate them with two separate metrics that fit our knowledge of the neighborhood.
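One way this correlation could be sketched (all dimension values and metric formulas below are hypothetical, not the post's actual data): take the FFT of each row of opening dimensions and check how its magnitudes correlate with two metrics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical opening dimensions per layout: window, door, transit widths
dims = np.column_stack([
    rng.normal(1.2, 0.1, n),   # window width
    rng.normal(0.9, 0.05, n),  # door width
    rng.uniform(1.0, 2.0, n),  # transit (open passage) width
])
# Two hypothetical neighborhood metrics formulated from the dimensions
metric_a = dims @ np.array([0.5, 1.0, 0.2])
metric_b = dims @ np.array([0.1, 0.3, 1.5])

# FFT of each row's dimensions; the magnitudes become candidate features
features = np.abs(np.fft.fft(dims, axis=1))

# Correlate each FFT feature with each metric
for j in range(features.shape[1]):
    r_a = np.corrcoef(features[:, j], metric_a)[0, 1]
    r_b = np.corrcoef(features[:, j], metric_b)[0, 1]
    print(f"feature {j}: r_a={r_a:+.2f}, r_b={r_b:+.2f}")
```

The zero-frequency feature is just the sum of the dimensions, so it correlates strongly with any metric that is a weighted sum of them.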
The reason for such a drastic improvement in the accuracy score lies in the Jacobian matrix. The determinant of a Jacobian matrix measures how much a transformation changes volume locally. In the gradient-descent example, the derivative of the cost function can be written as the product of the Jacobian matrix and the function in parametric form.
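For a least-squares cost 0.5 * ||f(theta)||^2, that product is J^T f(theta), where J is the Jacobian of f. A small check with a linear residual function (the matrix A and vector b below are arbitrary example values):

```python
import numpy as np

# Residual function in parametric form: f(theta) = A @ theta - b
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

def residual(theta):
    return A @ theta - b

# The Jacobian of this residual with respect to theta is constant: A
J = A

theta = np.array([0.5, -0.5])
# Gradient of the cost 0.5 * ||f(theta)||^2 is J^T f(theta)
grad = J.T @ residual(theta)

# Verify against a central-difference numerical gradient
eps = 1e-6
num_grad = np.array([
    (0.5 * np.sum(residual(theta + eps * e) ** 2) -
     0.5 * np.sum(residual(theta - eps * e) ** 2)) / (2 * eps)
    for e in np.eye(2)
])
print(np.allclose(grad, num_grad, atol=1e-4))  # → True
```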
My algorithm improved from an error rate of 2.x% to 1.x%.