
The McCulloch-Pitts neuron

The McCulloch-Pitts neuron dates back to 1943. It contains inputs, weights, and an activation function. This is precisely where you need to think like a machine and set aside human, neuroscience-based views of the brain for this type of problem. Starting from Chapter 8, Revolutions Designed for Some Corporations and Disruptive Innovations for Small to Large Companies, human cognition will be built on top of these models, but the foundations need to remain mathematical.

The following diagram shows the McCulloch-Pitts neuron model.

In this model, a number of inputs x are multiplied by weights and summed; the sum either reaches a threshold or it does not, which, once transformed, leads to an output y of 0 or 1. In the program that follows, y will be calculated in a slightly more elaborate way.
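
For reference, here is a minimal pure-Python sketch of the original 1943 threshold unit. The function name and the threshold value are illustrative assumptions, not part of the MCP.py program described below:

# Sketch of the original McCulloch-Pitts unit: a weighted sum followed
# by a hard threshold. Names and values are illustrative assumptions.
def mcp_neuron(inputs, weights, threshold):
    total = sum(x_i * w_i for x_i, w_i in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(mcp_neuron([1, 0, 1], [0.5, 0.5, 0.5], threshold=1.0))   # prints 1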

A Python-TensorFlow program, MCP.py, will be used to illustrate the neuron.

When designing neurons, computing performance needs to be taken into account. The following source code configures the threads of the session that will be opened later in the program. You can fine-tune the thread counts according to your needs.

import tensorflow as tf
import numpy as np

# Limit inter- and intra-operation parallelism to four threads each
config = tf.ConfigProto(
    inter_op_parallelism_threads=4,
    intra_op_parallelism_threads=4
)

In the following source code, the placeholders that will contain the input values (x), the weights (w), and the bias (b) are initialized. A placeholder is not just a variable that you can declare and use later when necessary; it represents the structure of your graph:

# Shapes match the five flow values fed to the neuron later in the program
x = tf.placeholder(tf.float32, shape=(1, 5), name='x')   # input quantities
w = tf.placeholder(tf.float32, shape=(5, 1), name='w')   # weights
b = tf.placeholder(tf.float32, shape=(1,), name='b')     # bias

In the original McCulloch-Pitts artificial neuron, the inputs (x) were multiplied by weights, summed, and compared to a threshold:

y = 1 if w1x1 + w2x2 + ... + wnxn >= threshold, otherwise y = 0

With a logistic activation function (sigmoid), which will be explained in the second part of the chapter, this mathematical function becomes a single line of code. A bias (b) has been added, which makes this neuron format useful even today, as shown in the following code.

# Weighted sum of the inputs plus the bias, then a sigmoid activation
y = tf.matmul(x, w) + b
s = tf.nn.sigmoid(y)

Before starting a session, the McCulloch-Pitts neuron (1943) needs an operator to set its weights directly. That is the main difference between the McCulloch-Pitts neuron and the perceptron (1957), which is the model of modern deep learning neurons: the perceptron optimizes its weights through training. Chapter 4, Become an Unconventional Innovator, describes the modern perceptron.
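
To make that difference concrete, here is a minimal sketch of the perceptron's weight-update rule. The data (logical AND), learning rate, and epoch count are illustrative assumptions, not part of MCP.py:

# Sketch of the perceptron rule (1957): weights are learned from errors,
# not set by hand. Data, learning rate, and epochs are assumptions.
X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]   # inputs
t = [0, 0, 0, 1]                               # targets (logical AND)
w_p = [0., 0.]                                 # learned weights
b_p = 0.                                       # learned bias
lr = 0.1                                       # learning rate

for epoch in range(20):
    for x_i, t_i in zip(X, t):
        y_i = 1 if x_i[0] * w_p[0] + x_i[1] * w_p[1] + b_p >= 0 else 0
        w_p[0] += lr * (t_i - y_i) * x_i[0]    # perceptron update
        w_p[1] += lr * (t_i - y_i) * x_i[1]
        b_p += lr * (t_i - y_i)

print(w_p, b_p)   # weights that the perceptron learned on its own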

The weights are now provided, and so are the quantities for each x stored at l1, one of the locations of the warehouse. Both appear in the session code below.

The weight values will be divided by 100 to represent warehouse flow percentages as values between 0 and 1 in a given location. The following code deals with the choice of one location, l1 only, with its values and parameters.

# Open the session with the thread configuration defined earlier
with tf.Session(config=config) as tfs:
    tfs.run(tf.global_variables_initializer())

    w_t = [[.1, .7, .75, .60, .20]]   # weights (flow percentages / 100)
    x_1 = [[10, 2, 1., 6., 2.]]       # quantities stored at location l1
    b_1 = [1]                         # bias
    w_1 = np.transpose(w_t)           # transpose to fit x_1 for the matmul

    # Feed the placeholders and compute the sigmoid output of the neuron
    value = tfs.run(s,
                    feed_dict={
                        x: x_1,
                        w: w_1,
                        b: b_1
                    })
    print('value for threshold calculation', value)

The session starts: the weights (w_t) and the quantities (x_1) of the warehouse flow are entered. Bias is set to 1 in this model. w_1 is transposed to fit x_1. The placeholders are fed through feed_dict, and the value of the neuron is calculated using the sigmoid function.

The program returns the following value:

value for threshold calculation [[ 0.99971133]]
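
As a sanity check, the same number can be reproduced outside TensorFlow: the weighted sum is 10(0.1) + 2(0.7) + 1(0.75) + 6(0.60) + 2(0.20) + 1 = 8.15, and sigmoid(8.15) ≈ 0.9997. A minimal NumPy sketch:

# Sanity check of the neuron output without TensorFlow
z = np.dot([10, 2, 1., 6., 2.], [.1, .7, .75, .60, .20]) + 1   # 8.15
print(1 / (1 + np.exp(-z)))   # ~0.99971, matching the session output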

This value represents the activity of location l1 at a given date and a given time. The higher the value, the higher the probable saturation rate of this area. That means there is little space left for an AGV (automated guided vehicle) that would like to store products. That is why the reinforcement learning program for a warehouse is looking for the least loaded area for a given product in this model.

Each location has a probable availability:

availability = 1 - load

The probable load of a given storage point lies between 0 and 1.

High availability values will be close to 1, and low availability values will be close to 0, as shown in the following example:

>>> print('Availability of lx', 1 - value)
Availability of lx [[ 0.00028867]]

For example, l1 has a probable load of 0.99 and a probable availability of roughly 0.0003. The goal of the AGV is to search for and find the closest, most available location to optimize its trajectories. l1 is obviously not a good candidate at that day and time. Load is a keyword in production activities, as in the Amazon example in Chapter 12, Automated Planning and Scheduling.

When all six locations' availabilities have been calculated by the McCulloch-Pitts neuron (each with its respective x quantity inputs, weights, and bias), a location vector of the results of this system will be produced. This means that the program needs to be implemented to run all six locations, not just one:

A(L) = {a(l1), a(l2), a(l3), a(l4), a(l5), a(l6)}

The availability (1 - output value of the neuron) of each location constitutes a six-line vector, lv, obtained by running the previous sample code on all six locations.
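
One possible way to build lv is to reuse the session pattern above in a loop. This is only a sketch: the quantities for l2 through l6 are not given in this section, so the location_data dictionary below is an illustrative assumption, and the same weights w_1 are reused for every location for simplicity:

# Sketch: compute availability (1 - neuron output) for all six locations.
# location_data is an illustrative assumption; only l1's quantities appear
# in this section, and real code would use each location's own weights.
location_data = {
    'l1': [[10, 2, 1., 6., 2.]],
    # 'l2' through 'l6' would follow the same format
}

lv = []
with tf.Session(config=config) as tfs:
    for name, x_i in sorted(location_data.items()):
        load = tfs.run(s, feed_dict={x: x_i, w: w_1, b: b_1})
        lv.append(1 - load[0][0])   # availability = 1 - load
print(lv)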

lv is the vector containing the value of each location for a given AGV to choose from. The values in the vector represent availability: 0.0002 means little availability, while 0.9 means high availability. Once the choice is made, the reinforcement learning program presented in the first chapter will optimize the AGV's trajectory to get to this specific warehouse location.

lv is the result of the weighting function applied to the six potential locations for the AGV. It is also a vector of transformed inputs.
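
Once lv has been computed, a minimal sketch of the selection step could take the index of the maximum availability (assuming lv is the Python list built in the sketch above):

# Sketch: choose the location with the highest availability
best = int(np.argmax(lv))
print('Most available location: l' + str(best + 1))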