Overfitting & Regularization
Published: 2019-06-25

The Problem of Overfitting

A common issue in machine learning and mathematical modeling is overfitting, which occurs when you build a model that captures not only the signal in a dataset but also the noise.

Because we want to create models that generalize and perform well on unseen data points, we need to avoid overfitting.

In comes regularization, a powerful mathematical tool for reducing overfitting in our models. It works by adding a penalty for model complexity or extreme parameter values, and it can be applied to many learning models: linear regression, logistic regression, and support vector machines, to name a few.

Below is the linear regression cost function with an added regularization component.

$$J(\beta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\beta(x^{(i)}) - y^{(i)}\right)^2 + \lambda\sum_{j=1}^{n}\beta_j^2$$

The regularization component is really just the sum of squared coefficients of your model (your beta values), multiplied by a parameter, lambda.

Lambda

Lambda can be adjusted to help you find a good fit for your model. However, a value that is too low might not do anything, and one that is too high might actually cause you to underfit the model and lose valuable information. It’s up to the user to find the sweet spot.

Cross-validation using different values of lambda can help you identify the optimal lambda, the one that produces the lowest out-of-sample error.
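As a concrete illustration, here is a minimal sketch of that search using scikit-learn (which calls the regularization strength alpha rather than lambda); the dataset here is a synthetic placeholder:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

# Synthetic placeholder data; substitute your own X and y.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

# Sweep a log-spaced range of lambda values and keep the one
# with the lowest cross-validated error.
lambdas = np.logspace(-3, 3, 13)
model = RidgeCV(alphas=lambdas, cv=5).fit(X, y)

print("best lambda:", model.alpha_)
```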

Regularization methods (L1 & L2)

The equation shown above is called Ridge Regression (L2): the beta coefficients are squared and summed. Another regularization method is Lasso Regression (L1), which sums the absolute values of the beta coefficients. You can even combine Ridge and Lasso linearly to get Elastic Net Regression (both squared and absolute-value components are included in the cost function).
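Written out side by side (a sketch in the same beta and lambda notation as the cost function above; the two-parameter elastic net form is one common way to write it), the penalty terms are:

$$\text{Ridge (L2)}:\ \lambda\sum_{j=1}^{n}\beta_j^{2}\qquad \text{Lasso (L1)}:\ \lambda\sum_{j=1}^{n}\lvert\beta_j\rvert\qquad \text{Elastic Net}:\ \lambda_1\sum_{j=1}^{n}\lvert\beta_j\rvert+\lambda_2\sum_{j=1}^{n}\beta_j^{2}$$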

L2 regularization tends to yield a “dense” solution, where the magnitudes of the coefficients are all reduced fairly evenly. For example, in a model with three parameters, B1, B2, and B3 will all shrink by a similar factor.

However, with L1 regularization the shrinkage of the parameters may be uneven, driving the value of some coefficients exactly to 0. In other words, it produces a sparse solution. Because of this property, L1 is often used for feature selection: it can help identify the most predictive features while zeroing out the others.

It is also a good idea to appropriately scale your features, so that your coefficients are penalized based on their predictive power and not their scale.
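Putting the last two points together, here is a quick sketch that standardizes the features and then compares the two penalties, again with scikit-learn on placeholder data; the alpha values are illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data in which only a few features are truly informative.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10.0, random_state=0)

# Scale first so the penalty reflects predictive power, not units.
ridge = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X, y)
lasso = make_pipeline(StandardScaler(), Lasso(alpha=1.0)).fit(X, y)

# L2 shrinks every coefficient a little; L1 zeroes many of them out.
print("ridge:", ridge[-1].coef_.round(2))
print("lasso:", lasso[-1].coef_.round(2))
```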

As you can see, regularization can be a powerful tool for reducing overfitting.


