Homework 2 is now available and due on 02/05/2004 (next Tuesday). This assignment focuses on the logistic regression model, which relates inputs and outputs through a log-linear function. We apply the Maximum Likelihood Estimation (MLE) approach to estimate the weights, using text classification examples. The goal is to analyze how the weights affect the classification of interesting versus uninteresting documents. Discussion includes model comparisons, performance metrics, and challenges such as overfitting and regularization.
Linear Model (III) Rong Jin
Announcement • Homework 2 is out and is due 02/05/2004 (next Tuesday) • Homework 1 is handed out
Recap: Logistic Regression Model • Assume the inputs and outputs are related through a log-linear function • Estimate weights: MLE approach
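For reference, the log-linear model and MLE objective the recap alludes to can be written as follows; this is the standard formulation, using the per-term weights wi and threshold c that appear later in these slides.

```latex
% Logistic regression (log-linear) model, using the per-term weights w_i
% and threshold c that appear later in these slides.
p(y \mid \mathbf{x}) \;=\; \frac{1}{1 + \exp\!\bigl(-y\,(c + \sum_i w_i x_i)\bigr)},
\qquad y \in \{+1, -1\}.
% MLE: choose the weights w and threshold c that maximize the training
% log-likelihood
l(D_{\text{train}}) \;=\; \sum_{n} \log p(y_n \mid \mathbf{x}_n).
```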
Example: Text Classification • Input x: a binary vector • Each word is a different dimension • xi = 0 if the ith word does not appear in the document; xi = 1 if it appears in the document • Output y: interesting document or not • +1: interesting • -1: uninteresting
Example: Text Classification Doc 1 The purpose of the Lady Bird Johnson Wildflower Center is to educate people around the world, … Doc 2 Rain Bird is one of the leading irrigation manufacturers in the world, providing complete irrigation solutions for people…
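The binary encoding described above can be sketched in a few lines of Python; the tokenization and the tiny five-word vocabulary below are illustrative choices, not part of the original example.

```python
# Illustrative sketch (not from the slides): building the binary
# bag-of-words vectors described above. The tokenizer and the tiny
# vocabulary are hypothetical choices made just for this example.
import re

def binary_vector(doc: str, vocabulary: list[str]) -> list[int]:
    """Return x with x[i] = 1 if the i-th vocabulary word appears in doc, else 0."""
    tokens = set(re.findall(r"[a-z]+", doc.lower()))
    return [1 if word in tokens else 0 for word in vocabulary]

vocabulary = ["bird", "wildflower", "irrigation", "world", "people"]
doc1 = "The purpose of the Lady Bird Johnson Wildflower Center is to educate people around the world"
doc2 = "Rain Bird is one of the leading irrigation manufacturers in the world, providing complete irrigation solutions for people"

print(binary_vector(doc1, vocabulary))  # [1, 1, 0, 1, 1]
print(binary_vector(doc2, vocabulary))  # [1, 0, 1, 1, 1]
```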
Example 2: Text Classification • Logistic regression model • Every term ti is assigned a weight wi • Learning the parameters: MLE approach • Need numerical solutions (there is no closed-form estimate)
Example 2: Text Classification • Weight wi • wi > 0: term ti is positive evidence • wi < 0: term ti is negative evidence • wi = 0: term ti is irrelevant to whether or not the document is interesting • The larger |wi| is, the more important term ti is in determining whether the document is interesting • Threshold c
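As a hypothetical illustration of how the weights act as positive or negative evidence and combine with the threshold c, consider the sketch below; the terms and weight values are invented purely for demonstration.

```python
# Hypothetical sketch (weights are invented for illustration): how the
# per-term weights w_i and the threshold c of the slide's model combine
# to classify a document as interesting (+1) or uninteresting (-1).
import math

weights = {"wildflower": 2.0, "educate": 1.5, "irrigation": -1.8, "manufacturers": -1.2}
c = 0.1  # threshold / bias term

def classify(tokens: set[str]) -> int:
    score = c + sum(w for term, w in weights.items() if term in tokens)
    p_interesting = 1.0 / (1.0 + math.exp(-score))   # logistic link
    return +1 if p_interesting > 0.5 else -1

print(classify({"wildflower", "educate", "people"}))        # +1 (positive evidence dominates)
print(classify({"irrigation", "manufacturers", "people"}))  # -1 (negative evidence dominates)
```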
Example 2: Text Classification • Dataset: Reuters-21578 • Classification accuracy • Naïve Bayes: 77% • Logistic regression: 88%
Why Does Logistic Regression Work Better for Text Classification? • Common words • Small weights in logistic regression • Large weights in naïve Bayes • Weight ~ p(w|+) – p(w|-) • Independence assumption • Naïve Bayes assumes that each word is generated independently • Logistic regression is able to take the correlations between words into account
Comparison • Discriminative Model • Models P(y|x) directly • Models the decision boundary • Usually good performance • But: slow convergence, expensive computation, sensitive to noisy data • Generative Model • Models P(x|y) • Models the input patterns • Usually fast convergence • Cheap computation • Robust to noisy data • But: usually performs worse
Problems with Logistic Regression? How about words that appear in only one class?
Overfitting Problem with Logistic Regression • Consider a word t that appears in only one training document d, and d is a positive document. Let w be its associated weight • Consider the derivative of l(Dtrain) with respect to w: it is always positive • w will become infinite!
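The derivative referred to above appeared as a formula on the original slide; a standard reconstruction, using the log-linear model written out earlier, shows why the weight diverges.

```latex
% Reconstruction of the missing derivative. With
% p(y | x) = 1 / (1 + exp(-y (c + \sum_i w_i x_i))) and
% l(D_train) = \sum_n log p(y_n | x_n), the derivative with respect to
% the weight w of term t is
\frac{\partial\, l(D_{\text{train}})}{\partial w}
  \;=\; \sum_{n} y_n\, x_{n,t}\,\bigl(1 - p(y_n \mid \mathbf{x}_n)\bigr)
  \;=\; 1 - p(+1 \mid \mathbf{x}_d) \;>\; 0,
% because x_{n,t} = 1 only for the single positive document d.
% The derivative is strictly positive for every value of w, so maximizing
% the likelihood keeps increasing w without bound: w -> infinity.
```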
Solution: Regularization • Regularized log-likelihood • Large weights → small weights • Prevents weights from being too large • Small weights → zero • Sparse weights
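The regularized log-likelihood itself was shown as a formula on the original slide; a common form consistent with the bullets above is the following (the exact penalty used in the lecture is an assumption here).

```latex
% Assumed form of the regularized objective (the original formula was in
% a figure and is not recoverable exactly):
l_{\text{reg}}(D_{\text{train}})
  \;=\; \sum_{n} \log p(y_n \mid \mathbf{x}_n)
        \;-\; \frac{\lambda}{2} \sum_{i} w_i^{2},
  \qquad \lambda > 0.
% The quadratic penalty keeps weights from growing too large; replacing
% it with an L1 penalty, \lambda \sum_i |w_i|, additionally drives small
% weights exactly to zero and yields the sparse solutions discussed next.
```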
Why Do We Need a Sparse Solution? • Two types of solutions • Case 1: many non-zero weights, but most of them are small • Case 2: only a small number of non-zero weights, and many of them are large • Occam's Razor: the simpler, the better • A simpler model that fits the data is unlikely to be a coincidence • A complicated model that fits the data might be a coincidence • Smaller number of non-zero weights → less evidence to consider → simpler model → case 2 is preferred
Finding Optimal Solutions • Concave objective function • No local maxima: any maximum is the global maximum • Many standard optimization algorithms work
Gradient Ascent • The regularized log-likelihood balances prediction errors against preventing the weights from being too large • Maximize the log-likelihood by iteratively adjusting the parameters in small increments • In each iteration, we adjust w in the direction of the gradient, which increases the log-likelihood
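A minimal gradient-ascent sketch for the L2-regularized objective is shown below; the learning rate, regularization strength, and stopping tolerance are illustrative assumptions, not values from the lecture.

```python
# Minimal sketch of gradient ascent for L2-regularized logistic regression
# (labels y in {+1, -1}, binary features x). The learning rate, lam, and
# stopping tolerance are illustrative choices, not values from the lecture.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lam=0.1, lr=0.1, tol=1e-5, max_iter=1000):
    n, d = X.shape
    w = np.zeros(d)
    c = 0.0
    for _ in range(max_iter):
        p = sigmoid(y * (X @ w + c))        # p(y_n | x_n) under current w, c
        resid = y * (1.0 - p)               # per-example gradient factor
        grad_w = X.T @ resid - lam * w      # d l_reg / d w
        grad_c = resid.sum()                # d l / d c (bias is not penalized)
        w += lr * grad_w                    # move in the direction of the gradient
        c += lr * grad_c
        if np.sqrt(grad_w @ grad_w + grad_c**2) < tol:  # "no incentive to move"
            break
    return w, c

# Toy usage: two documents, two features.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([+1.0, -1.0])
w, c = fit_logistic(X, y)
print(w, c)
```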
Graphical Illustration: the no-regularization case
When Should We Stop? • The gradient ascent learning method converges when there is no incentive to move the parameters in any particular direction, i.e., when the gradient of the objective is zero • In many cases, deciding when to stop can be very tricky • A small first-order derivative does not necessarily mean we are close to the maximum point
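The convergence condition referred to above (the formula was lost in extraction) is simply that the gradient vanishes; in practice it is checked against a small tolerance ε, which is an assumed implementation detail rather than something stated on the slide.

```latex
% Reconstruction of the convergence condition: gradient ascent stops when
% the gradient of the objective vanishes,
\nabla_{\mathbf{w}}\, l(D_{\text{train}}) \;=\; \mathbf{0},
% which in practice is checked with a small tolerance \epsilon (an assumed
% implementation detail, not from the slide):
\bigl\| \nabla_{\mathbf{w}}\, l(D_{\text{train}}) \bigr\| \;<\; \epsilon.
```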