Company: Unknown · Role: ML Engineer
Date: 12/1/2024
Real Interview · Easy · Verified · ML Coding

Linear Regression Implementation

machine-learning · linear-regression · coding · numpy
Updated Dec 20, 2025

Question

Implement linear regression from scratch using gradient descent.

Given:

  • Training data: X (n × m matrix) and y (n × 1 vector)
  • X contains n samples with m features
  • y contains n target values

Implement a LinearRegression class with:

  • fit(X, y, learning_rate, num_iterations) - train the model
  • predict(X) - make predictions
  • Use gradient descent to optimize the weights and bias

Example:

import numpy as np

# Training data
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([2, 4, 6, 8, 10])

# Train model
model = LinearRegression()
model.fit(X, y, learning_rate=0.01, num_iterations=1000)

# Make predictions
predictions = model.predict(np.array([[6], [7]]))
# Expected: [12, 14] (approximately)

Hints

Hint 1

Linear regression follows the formula: ŷ = Xw + b, where w is weights and b is bias. How do you initialize these parameters?
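
A minimal sketch of the initialization step, assuming a NumPy-based implementation (the helper name init_params is illustrative, not part of the required interface):

import numpy as np

def init_params(X):
    # One weight per feature; zeros (or small random values) work for plain linear regression
    n_samples, n_features = X.shape
    weights = np.zeros(n_features)  # w, shape (m,)
    bias = 0.0                      # b, scalar intercept
    return weights, bias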

Hint 2

The loss function is Mean Squared Error: L = (1/n) × Σ(yᵢ - ŷᵢ)². You need to compute gradients of this with respect to w and b.
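
A hedged sketch of the loss and its gradients under that definition (the function name mse_and_gradients is an assumption, not part of the required interface):

import numpy as np

def mse_and_gradients(X, y, weights, bias):
    n = X.shape[0]
    y_pred = X @ weights + bias          # ŷ = Xw + b
    error = y - y_pred                   # (y - ŷ)
    loss = np.mean(error ** 2)           # L = (1/n) × Σ(yᵢ - ŷᵢ)²
    grad_w = -(2 / n) * (X.T @ error)    # ∂L/∂w
    grad_b = -(2 / n) * np.sum(error)    # ∂L/∂b
    return loss, grad_w, grad_b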

Hint 3

Gradient descent update rule: w = w - α × (∂L/∂w), where α is learning rate. The gradient ∂L/∂w = -(2/n) × X^T × (y - ŷ).
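
Putting the three hints together, one possible training loop (a sketch under the defaults from the example above, not the reference solution):

import numpy as np

def gradient_descent(X, y, learning_rate=0.01, num_iterations=1000):
    n, m = X.shape
    weights, bias = np.zeros(m), 0.0
    for _ in range(num_iterations):
        y_pred = X @ weights + bias           # forward pass: ŷ = Xw + b
        error = y - y_pred
        grad_w = -(2 / n) * (X.T @ error)     # ∂L/∂w
        grad_b = -(2 / n) * np.sum(error)     # ∂L/∂b
        weights -= learning_rate * grad_w     # w = w - α × ∂L/∂w
        bias -= learning_rate * grad_b        # b = b - α × ∂L/∂b
    return weights, bias

On the example data this converges to weights ≈ [2] and bias ≈ 0, so wrapping it in the fit/predict interface reproduces the expected [12, 14] output.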


Learning Resources

Related Problems

  • Logistic Regression Implementation
  • Ridge Regression from Scratch
  • Polynomial Regression
  • Batch vs Stochastic Gradient Descent