Regression models for data science (Bonus material)
Consider a model {$$}Y_i = f(X_i) + \epsilon_i{/$$}. How can we fit such a model using linear models (a technique often called scatterplot smoothing)?

We're going to cover a basic technique called regression splines. Consider the model

{$$}Y_i = \beta_0 + \beta_1 X_i + \sum_{k=1}^d (x_i - \xi_k)_+ \gamma_k + \epsilon_i{/$$}

where {$$}(a)_+ = a{/$$} if {$$}a > 0{/$$} and 0 otherwise, and {$$}\xi_1 \leq \ldots \leq \xi_d{/$$} are known knot points. Prove to yourself that the mean function

{$$}\beta_0 + \beta_1 X_i + \sum_{k=1}^d (x_i - \xi_k)_+ \gamma_k{/$$}

is continuous at the knot points. That is, we could draw this function without lifting up the pen.

Let's try a simulated example¹²⁵. The function is a sine curve with noise. We have twenty knot points.

```r
## simulate the data
n <- 500; x <- seq(0, 4 * pi, length = n); y <- sin(x) + rnorm(n, sd = .3)
## the break points of the spline fit
knots <- seq(0, 8 * pi, length = 20)
## building the regression spline terms
splineTerms <- sapply(knots, function(knot) (x > knot) * (x - knot))
## adding an intercept and the linear term
xMat <- cbind(1, x, splineTerms)
## fit the model, notice the intercept is in xMat so we have -1
yhat <- predict(lm(y ~ xMat - 1))
## perform the plot
plot(x, y, frame = FALSE, pch = 21, bg = "lightblue", cex = 2)
lines(x, yhat, col = "red", lwd = 2)
```

¹²⁵https://youtu.be/DRKg33tmoAE

The plot recovers the sine curve fairly well.
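The same fit can be sketched in Python (a hypothetical translation of the R code above, using numpy's least squares in place of `lm`; here the knots are placed strictly inside the observed range, which is my choice, not the book's):

```python
import numpy as np

# Simulate the sine-plus-noise data from the text
rng = np.random.default_rng(0)
n = 500
x = np.linspace(0, 4 * np.pi, n)
y = np.sin(x) + rng.normal(0, 0.3, n)

# Truncated-line ("hinge") basis at twenty interior knots: (x - xi_k)_+
knots = np.linspace(0, 4 * np.pi, 22)[1:-1]
spline_terms = np.maximum(x[:, None] - knots[None, :], 0.0)

# Design matrix: intercept, linear term, then the knot terms
X = np.column_stack([np.ones(n), x, spline_terms])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta  # piecewise-linear fit, continuous at every knot
```

The fitted `yhat` traces the underlying sine curve closely, just as the red line does in the R plot.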
However, it has abrupt break points. This is because our fitted function is continuous at the knot points, but is not differentiable there. We can get it to have one continuous derivative at those points by adding squared terms. Adding cubic terms would make it twice continuously differentiable (so even a little smoother looking). Here's our squared regression spline model:

{$$}Y_i = \beta_0 + \beta_1 X_i + \beta_2 X_i^2 + \sum_{k=1}^d (x_i - \xi_k)_+^2 \gamma_k + \epsilon_i{/$$}

```r
splineTerms <- sapply(knots, function(knot) (x > knot) * (x - knot)^2)
xMat <- cbind(1, x, x^2, splineTerms)
yhat <- predict(lm(y ~ xMat - 1))
plot(x, y, frame = FALSE, pch = 21, bg = "lightblue", cex = 2)
lines(x, yhat, col = "red", lwd = 2)
```

Plot of the fit after adding the squared terms.

Notice how much smoother the fitted (red) curve is now.

## Notes

The collection of regressors is called a basis.
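To see numerically why squaring the hinge term buys one continuous derivative, here is a small Python check (mine, not from the text) comparing one-sided difference quotients at a knot:

```python
import numpy as np

xi, h = 1.0, 1e-6  # knot location and step size (illustrative values)

def hinge(x, p):
    """Truncated power term (x - xi)_+^p."""
    return np.maximum(x - xi, 0.0) ** p

# One-sided difference quotients of the degree-1 hinge at the knot
left1 = (hinge(xi, 1) - hinge(xi - h, 1)) / h   # slope approaching from the left
right1 = (hinge(xi + h, 1) - hinge(xi, 1)) / h  # slope approaching from the right

# Same quotients for the squared hinge
left2 = (hinge(xi, 2) - hinge(xi - h, 2)) / h
right2 = (hinge(xi + h, 2) - hinge(xi, 2)) / h
```

The degree-1 slopes disagree by about 1 (the kink), while the squared hinge's one-sided slopes agree to within the step size, consistent with its derivative 2(x - ξ)₊ being continuous.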
People have spent a lot of time thinking about bases for this kind of problem, so consider this treatment just a teaser. Further note that a single knot point term can fit hockey-stick-like processes, as long as you know exactly where the knot point is. These bases can be used in GLMs as well. Thus, this gives us an easy method for fitting non-linear functions in the linear predictor.
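The hockey-stick remark can be made concrete with a short Python sketch (the simulated slopes and knot location are illustrative, not from the book): with a single known knot ξ, the basis 1, x, (x - ξ)₊ recovers both the pre-knot slope and the change in slope.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
xi = 4.0  # the knot location, assumed known exactly

# Truth: intercept 1, slope 0.5 before the knot, slope 0.5 + 2.0 after it
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - xi, 0.0) + rng.normal(0, 0.2, x.size)

# Least squares with the single hinge term in the design matrix
X = np.column_stack([np.ones_like(x), x, np.maximum(x - xi, 0.0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta estimates (intercept, pre-knot slope, change in slope)
```

The estimated coefficients land near (1.0, 0.5, 2.0), so the kinked trend is recovered with just three parameters.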
An issue with these approaches, in either linear or generalized linear models, is the large number of parameters introduced. Most solutions require some method of "regularization", a process in which the effective dimension is reduced by adding a term that penalizes large coefficients.

## Harmonics using linear models

Finally, we'd like to end with another basis, perhaps the most famous one.
Given a musical chord played continuously, could we use linear models to discover the notes? In the following simulation I consider the piano keys from middle C for a full octave¹²⁶.

We're going to generate our chord as sine curves of the specified frequencies. Then we'll fit a linear model with all of the sine curves and look at which coefficients seem large.
Those would make up our chord. I got the note frequencies here¹²⁷.

```r
## Chord finder, playing the white keys on a piano from octave c4 - c5
## Note frequencies in the order of C4, D4, E4, F4, G4, A4, B4, C5
notes4 <- c(261.63, 293.66, 329.63, 349.23, 392.00, 440.00, 493.88, 523.25)
## The time variable (how long the chord is played and how frequently it is digitally sampled)
t <- seq(0, 2, by = .001); n <- length(t)
## The notes for a C Major Chord
c4 <- sin(2 * pi * notes4[1] * t); e4 <- sin(2 * pi * notes4[3] * t)
g4 <- sin(2 * pi * notes4[5] * t)
## Create the chord by adding the three together
chord <- c4 + e4 + g4 + rnorm(n, 0, 0.3)
## Create a basis that has all of the notes
x <- sapply(notes4, function(freq) sin(2 * pi * freq * t))
## Fit the model
fit <- lm(chord ~ x - 1)
```

¹²⁶https://en.wikipedia.org/wiki/Octave
¹²⁷http://www.phy.mtu.edu/~suits/notefreqs.html

Plot of the fitted coefficients.

It is interesting to note that what we're accomplishing is highly related to the famous Discrete Fourier Transform.
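As a cross-check, here is a hypothetical Python version of the chord regression (numpy least squares standing in for `lm`; the frequencies are the eight listed above): projecting the noisy chord onto sine curves at every note frequency and looking for the large coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)
# White-key frequencies C4 through C5, as in the R code
notes4 = np.array([261.63, 293.66, 329.63, 349.23, 392.00, 440.00, 493.88, 523.25])
t = np.linspace(0, 2, 2001)  # two seconds sampled every millisecond

# C major chord: C4 + E4 + G4, plus noise
chord = (np.sin(2 * np.pi * notes4[0] * t)
         + np.sin(2 * np.pi * notes4[2] * t)
         + np.sin(2 * np.pi * notes4[4] * t)
         + rng.normal(0, 0.3, t.size))

# Basis of sine curves at all eight note frequencies
X = np.sin(2 * np.pi * t[:, None] * notes4[None, :])
beta, *_ = np.linalg.lstsq(X, chord, rcond=None)

# The three largest coefficients should flag C4, E4, G4 (indices 0, 2, 4)
top3 = sorted(int(i) for i in np.argsort(np.abs(beta))[-3:])
```

The coefficients for the three notes in the chord come out near 1 and the rest near 0, which is exactly what the coefficient plot in the text shows.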
This is an automatic way to fit all of the sine and cosine terms available to a set of data. And the Fast (Discrete) Fourier Transform (FFT) does it about as fast as possible (faster than fitting the linear model). Here I give some code showing how to take the FFT and plot the coefficients. Notice it loads on the three notes comprising the chord.

```r
## (How you would really do it)
a <- fft(chord); plot(Re(a)^2, type = "l")
```

Fit of the FFT to the data.

## Thanks!

Thanks for your time and attention in reading this book. I hope that you've learned some of the basics of linear models and have internalized that these are some incredibly powerful tools. As a next direction, you might consider more coverage of generalized linear models, or looking at the specific cases for correlated data. Thanks again!

Brian.