Concept · Beginner · 3 min read

What is a neural network?

Quick answer
A neural network is a computing system inspired by the human brain that processes data through layers of interconnected nodes called neurons. It learns patterns from data to perform tasks such as classification, regression, and pattern recognition.

How it works

A neural network consists of layers of nodes (neurons). Each node receives inputs, multiplies them by weights, sums the results, and passes the sum through an activation function, loosely mimicking how biological neurons fire. The network learns by adjusting its weights during training to minimize error, much as the brain strengthens connections with experience.
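
The computation inside a single neuron can be sketched in a few lines of NumPy (the names `neuron`, `x`, `w`, and `b` here are illustrative, not from any particular library):

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: a weighted sum of the inputs plus a bias,
    passed through a sigmoid activation."""
    z = np.dot(w, x) + b          # weighted sum of inputs
    return 1 / (1 + np.exp(-z))   # sigmoid squashes z into (0, 1)

x = np.array([0.5, -1.0])   # two input features
w = np.array([0.8, 0.2])    # one weight per input
b = 0.1                     # bias shifts the firing threshold

print(neuron(x, w, b))      # ≈ 0.574
```

Training adjusts `w` and `b`; everything else in this computation stays fixed.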

Think of it like a factory assembly line: raw materials (input data) go through multiple stations (layers), each transforming the product slightly until the final output is produced. Each station’s settings (weights) are tuned to optimize the final product quality (prediction accuracy).

Concrete example

Here is a simple single-layer neural network using Python and NumPy, trained to learn the logical OR of two binary features. (A single layer can only learn linearly separable functions, so it cannot solve XOR; that would require a hidden layer.)

python
import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of sigmoid for backpropagation
# (expects x to already be a sigmoid output)
def sigmoid_derivative(x):
    return x * (1 - x)

# Input data (4 samples, 2 features each)
inputs = np.array([[0, 0],
                   [0, 1],
                   [1, 0],
                   [1, 1]])

# Expected outputs (OR problem: 1 if either input is 1)
expected_output = np.array([[0], [1], [1], [1]])

# Initialize weights randomly
np.random.seed(42)
weights = np.random.uniform(size=(2, 1))
bias = np.random.uniform(size=(1,))

# Training loop: gradient descent on the squared error
learning_rate = 0.1
for _ in range(10000):
    # Forward pass
    linear_output = np.dot(inputs, weights) + bias
    predicted_output = sigmoid(linear_output)

    # Calculate error
    error = expected_output - predicted_output

    # Backpropagation: chain rule through the sigmoid
    adjustments = error * sigmoid_derivative(predicted_output)
    weights += np.dot(inputs.T, adjustments) * learning_rate
    bias += np.sum(adjustments) * learning_rate

print("Predictions after training:", np.round(predicted_output.flatten()))
output
Predictions after training: [0. 1. 1. 1.]

When to use it

Use neural networks when you need to model complex, nonlinear relationships in data such as image recognition, natural language processing, or speech recognition. They excel at learning from large datasets and generalizing to new inputs.

Do not use neural networks when you have very small datasets, require interpretable models, or when simpler algorithms like linear regression or decision trees suffice.

Key terms

Neuron: Basic processing unit in a neural network that applies weights and an activation function.
Weights: Parameters that scale input signals, learned during training.
Activation function: Function that introduces non-linearity, e.g., sigmoid or ReLU.
Layer: A group of neurons; networks include input, hidden, and output layers.
Backpropagation: Algorithm that updates weights by propagating error backward through the network.
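
The two activation functions named above behave quite differently, which a minimal NumPy sketch makes concrete:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into (0, 1)
    return 1 / (1 + np.exp(-x))

def relu(x):
    # Passes positive values through unchanged, zeroes out negatives
    return np.maximum(0, x)

xs = np.array([-2.0, 0.0, 2.0])
print(sigmoid(xs))  # ≈ [0.119, 0.5, 0.881]
print(relu(xs))     # [0. 0. 2.]
```

Sigmoid is common in small examples and output layers; ReLU is the usual default in hidden layers of deep networks because its gradient does not vanish for positive inputs.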

Key Takeaways

  • Neural networks mimic brain neurons to learn complex patterns from data.
  • They require large datasets and computational resources to train effectively.
  • Use neural networks for tasks like image, speech, and language processing.
  • Simpler models are better for small data or when interpretability is critical.