Overview
This is an interactive neural network demonstrator that learns boolean logic functions. Enter a boolean expression, configure the network, and watch it learn through backpropagation in real time.
Quick Start
- Enter a boolean expression (e.g., A ^ B for XOR)
- Adjust network settings if needed (hidden layers, neurons, learning rate)
- Click Train to start learning
- Watch the network learn - training stops automatically at 95% accuracy
- Use the test toggles to verify the network's predictions
Boolean Expressions
Use JavaScript syntax with variables A through H (up to 8 inputs):
Operators:
- && (AND)
- || (OR)
- ^ (XOR)
- ! (NOT)
Examples:
A && B → Both must be true
A || B → At least one true
A ^ B → Exactly one true (XOR)
!(A && B) → NAND (not both)
A && B && C → All three must be true
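Because the demo uses JavaScript semantics, an expression's full truth table can be enumerated with the Function constructor. A minimal sketch (the `truthTable` helper is illustrative, not part of the app's actual code):

```javascript
// Enumerate all input combinations for an expression with n variables
// and evaluate it as JavaScript, coercing the result to 0 or 1.
function truthTable(expr, numVars) {
  const vars = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'].slice(0, numVars);
  const fn = new Function(...vars, `return (${expr}) ? 1 : 0;`);
  const rows = [];
  for (let i = 0; i < (1 << numVars); i++) {
    // Bit j of i gives the value of the j-th variable.
    const inputs = vars.map((_, j) => (i >> j) & 1);
    rows.push({ inputs, output: fn(...inputs) });
  }
  return rows;
}

// truthTable('A ^ B', 2) → outputs 0, 1, 1, 0 (true exactly when one input is 1)
```

These input/output pairs are exactly what the network trains against: each row is one training example.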
Network Settings
- Hidden Layers: Number of layers between input and output (0-3). 0 = direct connection (linear only), 1+ = non-linear patterns
- Neurons per Layer: Each hidden layer can have 2-12 neurons. More neurons = more capacity but slower training
- Learning Rate: How fast the network learns (0.01-2.0). Higher = faster but may be unstable, lower = slower but more stable. Default: 0.8
Tip: Simple functions (AND, OR) work with 0 hidden layers. Complex functions (XOR) need at least 1 hidden layer with 2+ neurons.
Visualization Guide
- Neurons (circles): Brightness shows activation level. Dark = low (near 0), Bright = high (near 1)
- Numbers: Exact activation value displayed (0.00 to 1.00)
- Connections (lines): Green = positive weight, Red = negative weight. Thicker lines = stronger weights
- Small dots: Bias indicators - Green = positive bias, Red = negative bias
- Input labels: A, B, C... H shown on the left side of input neurons
- Hover: Move your mouse over neurons or connections to see weight/bias values in a tooltip
- Click to Edit: Click on any neuron to edit its bias (or activation for input neurons). Click on connections to edit weights
The visualization updates in real time during training, showing how weights and activations change as the network learns. You can manually adjust weights and biases by clicking on them - the network will recalculate activations automatically.
Neural Network Architecture
A neural network consists of layers of interconnected neurons:
- Input Layer: Receives the input data (e.g., boolean values A, B, C...)
- Hidden Layers: Process the information through weighted connections. Each hidden layer can have multiple neurons that learn different features
- Output Layer: Produces the final prediction (0 or 1 for boolean logic)
Each connection between neurons has a weight that determines how much influence one neuron has on another. Each neuron (except inputs) also has a bias that shifts its activation threshold.
Activation Function: Sigmoid
This network uses the sigmoid activation function, which maps any real number to a value between 0 and 1:
Sigmoid Formula:
σ(x) = 1 / (1 + e^(-x))
Properties:
- Smooth, differentiable curve (essential for backpropagation)
- Output range: (0, 1) - perfect for binary classification
- Large negative values → output near 0 (inactive neuron)
- Large positive values → output near 1 (active neuron)
- Value of 0 → output of 0.5 (neutral, middle of curve)
- Steepest gradient at x = 0, flattens at extremes
The sigmoid function introduces non-linearity, allowing the network to learn complex patterns. Without it, multiple layers would be equivalent to a single layer (linear transformation).
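The sigmoid and its derivative are short enough to sketch directly. A common trick, used later in backpropagation, is to express the derivative in terms of the neuron's output rather than its input:

```javascript
// σ(x) = 1 / (1 + e^(-x))
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// Given o = σ(x), the derivative dσ/dx equals o × (1 - o),
// so it can be computed from the already-stored activation.
function sigmoidDerivative(output) {
  return output * (1 - output);
}

// sigmoid(0) → 0.5 (neutral); sigmoid(10) ≈ 0.99995 (saturated)
```

Note that the derivative peaks at 0.25 when the output is 0.5 and approaches 0 at the extremes, which is why saturated neurons learn slowly.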
Forward Pass
During the forward pass, data flows from input to output:
For each neuron:
1. Calculate weighted sum: z = Σ(weight_i × input_i) + bias
2. Apply activation: output = σ(z) = 1 / (1 + e^(-z))
Step-by-step:
- Input values are fed to the first layer
- Each neuron in the next layer receives inputs from all neurons in the previous layer
- Each neuron multiplies inputs by their weights, sums them, adds bias
- The sum is passed through the sigmoid function
- This output becomes input for the next layer
- Process repeats until reaching the output layer
The forward pass produces a prediction, but initially the weights are random, so predictions are poor.
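The steps above can be sketched as a pair of small functions. This is a minimal illustration, not the app's internals; it assumes `weights[j][i]` connects input i to neuron j:

```javascript
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

// One layer: for each neuron, z = Σ(weight_i × input_i) + bias, then σ(z).
function layerForward(inputs, weights, biases) {
  return weights.map((neuronWeights, j) => {
    const z = neuronWeights.reduce((sum, w, i) => sum + w * inputs[i], biases[j]);
    return sigmoid(z);
  });
}

// Full pass: each layer's activations become the next layer's inputs.
function forwardPass(inputs, layers) {
  return layers.reduce(
    (acts, layer) => layerForward(acts, layer.weights, layer.biases),
    inputs
  );
}
```

With all weights and biases at zero, every neuron outputs 0.5, which matches the "neutral" point of the sigmoid described above.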
Backpropagation Learning Algorithm
Backpropagation is how neural networks learn. It uses gradient descent to minimize error:
1. Loss Calculation:
We use Mean Squared Error (MSE) to measure how wrong the prediction is. For a single example with one output it reduces to the squared error:
Loss Formula:
Loss = (target - output)²
Averaging this over all training examples gives the MSE for the epoch.
2. Error Propagation:
The algorithm calculates how much each weight contributed to the error:
- Output Layer: Error = (target - output) × σ'(output)
- Hidden Layers: Error propagates backward, weighted by connection strengths
- The derivative of sigmoid determines how sensitive the output is to changes. Expressed in terms of a neuron's output o (rather than its raw input), it simplifies to σ'(o) = o × (1 - o), which is the form used in the output-layer formula above
3. Weight Updates:
Weights are adjusted to reduce error:
Update Rule:
new_weight = old_weight + learning_rate × error × input
- Learning Rate: Controls step size. Too high → overshoots, too low → slow learning
- Weights connected to larger errors get bigger adjustments
- Biases are updated similarly: new_bias = old_bias + learning_rate × error
4. Iteration:
This process repeats for all training examples, gradually reducing error. Each complete pass through all examples is called an epoch.
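For a single output neuron, one training step (forward pass, error term, weight and bias updates) can be sketched as follows. This is an illustration of the update rule above under the doc's sign convention, not the app's actual training loop:

```javascript
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

function trainStep(inputs, target, weights, bias, learningRate) {
  // Forward pass: z = Σ(weight_i × input_i) + bias, output = σ(z)
  const z = weights.reduce((s, w, i) => s + w * inputs[i], bias);
  const output = sigmoid(z);
  // Error term: (target - output) × σ'(output), with σ'(o) = o × (1 - o)
  const error = (target - output) * output * (1 - output);
  // new_weight = old_weight + learning_rate × error × input
  const newWeights = weights.map((w, i) => w + learningRate * error * inputs[i]);
  // new_bias = old_bias + learning_rate × error
  const newBias = bias + learningRate * error;
  return { weights: newWeights, bias: newBias, loss: (target - output) ** 2 };
}
```

Running this step repeatedly on the same example nudges the output toward the target, so the loss shrinks with each iteration.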
Weight Initialization
This network uses Xavier/Glorot initialization:
Initialization Formula:
weight = random(-1, 1) × √(2 / (inputs + outputs))
bias = random(-0.1, 0.1)
Why this matters:
- Prevents weights from being too large (causes saturation) or too small (causes slow learning)
- Scales weights based on layer size to maintain signal variance
- Helps the network learn faster and more reliably
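The initialization formulas above translate directly to code. A sketch, with `fanIn` and `fanOut` standing for the layer's input and output counts:

```javascript
// weight = random(-1, 1) × √(2 / (inputs + outputs))
function initWeight(fanIn, fanOut) {
  const scale = Math.sqrt(2 / (fanIn + fanOut));
  return (Math.random() * 2 - 1) * scale;
}

// bias = random(-0.1, 0.1)
function initBias() {
  return Math.random() * 0.2 - 0.1;
}
```

For a hidden layer with 2 inputs and 1 output, for example, the scale factor is √(2/3) ≈ 0.82, so initial weights stay comfortably inside the sigmoid's steep region.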
Why Neural Networks Work
Neural networks can approximate any continuous function (Universal Approximation Theorem):
- Hidden layers create non-linear combinations of inputs
- Multiple neurons allow learning different features simultaneously
- Backpropagation finds the right combination of weights through optimization
- For boolean logic, the network learns decision boundaries that separate true from false
Example (XOR): XOR cannot be learned by a single layer (it's not linearly separable). A hidden layer with 2+ neurons creates curved decision boundaries that can separate the XOR pattern.
Interactive Features
- Edit Weights: Click on any connection line to edit its weight value
- Edit Biases: Click on any hidden or output neuron to edit its bias
- Edit Input Activations: Click on input neurons to manually set activation (will be overwritten when test inputs change)
- Hover Tooltips: Hover over neurons or connections to see their current values
- Test Toggles: Use the toggle switches in the Test section to manually test different input combinations
Tips
- If training doesn't improve, click Reset to get new random weights
- Complex expressions may require more hidden layers or neurons
- If training is unstable (loss jumps around), try lowering the learning rate
- Use the test toggles to verify the network works with all input combinations
- The network initializes with random weights each time you reset or change architecture
- Training automatically stops when loss < 0.01 and accuracy ≥ 95%
- You can manually edit weights/biases during or after training to experiment with the network