Understanding Multiple Inputs in Neural Networks (With Python Examples) — Part 2

In the previous article, we plotted a surface using inputs related to Petal Width and Sepal Width to predict whether the species is Setosa or not.

Let’s continue with the remaining steps.

After plotting the bent surface for the top hidden node, we multiply every point on it by -0.1, the weight on the connection from that node to the output node.

import numpy as np
import matplotlib.pyplot as plt

# Create a grid of dots across the entire bottom plane
p_dots = np.linspace(0, 1, 6)
s_dots = np.linspace(0, 1, 6)
P_grid, S_grid = np.meshgrid(p_dots, s_dots)

# Top hidden node: weighted sum of the inputs, then ReLU
Y_grid_dots = np.maximum(
    0,
    (P_grid * -2.5) + (S_grid * -0.6) + 1.6
)

# Multiply by the output weight (-0.1) to scale the bent surface
Y_grid_dots = Y_grid_dots * -0.1

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

ax.plot_surface(
    P_grid,
    S_grid,
    Y_grid_dots,
    alpha=0.7
)

ax.set_xlabel('Petal Width')
ax.set_ylabel('Sepal Width')
ax.set_zlabel('Setosa Output')

plt.show()

Now we do the exact same thing for the bottom node in the hidden layer.

We apply ReLU to this and then multiply the result by 1.5 to get the final orange bent surface.

import numpy as np
import matplotlib.pyplot as plt

# Create a grid of dots across the entire bottom plane
p_dots = np.linspace(0, 1, 6)
s_dots = np.linspace(0, 1, 6)
P_grid, S_grid = np.meshgrid(p_dots, s_dots)

# Bottom hidden node calculation
Y_grid_dots = np.maximum(
    0,
    (P_grid * -1.5) + (S_grid * 0.4) + 0.7
)

# Apply output weight
Y_grid_dots = Y_grid_dots * 1.5

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

ax.plot_surface(
    P_grid,
    S_grid,
    Y_grid_dots,
    color='orange',
    alpha=0.7
)

ax.set_xlabel('Petal Width')
ax.set_ylabel('Sepal Width')
ax.set_zlabel('Setosa Output')

plt.show()

Now we add the output values (the z-axis heights) of the blue bent surface to the corresponding output values of the orange bent surface.

We do this for every single point, and we end up with this green surface.

import numpy as np
import matplotlib.pyplot as plt

# Create a grid of dots across the entire bottom plane
p_dots = np.linspace(0, 1, 6)
s_dots = np.linspace(0, 1, 6)
P_grid, S_grid = np.meshgrid(p_dots, s_dots)

# ----- Blue surface (top hidden node) -----
Y_blue = np.maximum(
    0,
    (P_grid * -2.5) + (S_grid * -0.6) + 1.6
)
Y_blue = Y_blue * -0.1

# ----- Orange surface (bottom hidden node) -----
Y_orange = np.maximum(
    0,
    (P_grid * -1.5) + (S_grid * 0.4) + 0.7
)
Y_orange = Y_orange * 1.5

# ----- Final output surface (sum) -----
Y_final = Y_blue + Y_orange

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

ax.plot_surface(
    P_grid,
    S_grid,
    Y_final,
    color='green',
    alpha=0.7
)

ax.set_xlabel('Petal Width')
ax.set_ylabel('Sepal Width')
ax.set_zlabel('Setosa Output')

plt.show()

At the end, we need to add the output bias, which is 0.
Adding 0 does not change the green surface, so the green surface is the final output for Setosa.
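For reference, the whole forward pass for the Setosa output, including the final bias of 0, can be collapsed into a single function. This is just a sketch (the function name is mine; the weights and biases come from the code above):

```python
import numpy as np

def setosa_output(petal_width, sepal_width):
    # Top hidden node: weighted sum plus bias, then ReLU
    top = np.maximum(0, petal_width * -2.5 + sepal_width * -0.6 + 1.6)
    # Bottom hidden node: weighted sum plus bias, then ReLU
    bottom = np.maximum(0, petal_width * -1.5 + sepal_width * 0.4 + 0.7)
    # Output node: scale each hidden activation by its output weight
    # and add the output bias of 0
    return top * -0.1 + bottom * 1.5 + 0

print(round(setosa_output(0.0, 0.0), 2))  # 0.89
```

Because the function accepts NumPy arrays as well as scalars, passing in `P_grid` and `S_grid` reproduces the green surface exactly.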

From this, we can see that the output for Setosa is highest when the petal width is close to 0, and lowest when the petal width is close to 1.
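This is easy to check numerically. Here is a quick sketch that repeats the blue-plus-orange calculation from the code above, holding sepal width fixed at 0.5 (an arbitrary choice of mine) and evaluating the surface at both ends of the petal-width axis:

```python
import numpy as np

def setosa(p, s):
    # Blue surface (top hidden node), scaled by its output weight -0.1
    blue = np.maximum(0, p * -2.5 + s * -0.6 + 1.6) * -0.1
    # Orange surface (bottom hidden node), scaled by its output weight 1.5
    orange = np.maximum(0, p * -1.5 + s * 0.4 + 0.7) * 1.5
    return blue + orange

s = 0.5  # hold sepal width fixed
print(round(setosa(0.0, s), 2))  # petal width near 0 -> 1.22 (highest)
print(round(setosa(1.0, s), 2))  # petal width near 1 -> 0.0 (lowest)
```

At petal width 1 both hidden nodes are clipped to 0 by ReLU, so the output bottoms out at exactly 0.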

In the next article, we will explore how to use this neural network to predict Setosa, Versicolor, and Virginica.

You can try the examples out in the Colab notebook.

Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.

Just run:

ipm install repo-name

… and you’re done! 🚀


🔗 Explore Installerpedia here
