20.20 Stated, or Stateful Machines

I have mentioned several stateful machines already. One was the rock and lever, when you include the possibility of the rock slipping off the lever in the model.

The slipped rock is a third output state. In the slipped state, pushing the lever down or letting it up leaves the rock in the same slipped state. You need a third input, moving the rock back onto the lever (or the lever back under the rock), to return to a state where pushing the lever down will raise the rock and letting it up will let the rock back down.

(I am deliberately not calling this class the class of state machines, because state machine is a technical term with engineering and mathematical semantics that I want to avoid dealing with at this point.)

Another example of a stateful machine would be a push-on, push-off switch. There is only one input, push. After you push the button, the output state is the opposite of what it was before. (Unless you carefully refrain from pushing it past the input threshold; but if you want to include a half-push as an input, you have to add it to your model.)

In order to invert the output state, either the switch has to have an internal state that it can invert, or it has to feed the output back into the transform. Either way, there is a point in time when the input passes the threshold and the new state is derived from the previous state.
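
For instance, a minimal sketch in C (the names are my own choices, not anything from the earlier discussion) of the two arrangements might look like this:

/* 1. Internal state: the switch remembers its own state.               */
static int internal_state = 0;          /* 0 is off, 1 is on            */

int push_with_internal_state( void )
{
    internal_state = ! internal_state;  /* invert the remembered state  */
    return internal_state;              /* and drive the output from it */
}

/* 2. Output feedback: the transform reads the current output back in.  */
int push_with_feedback( int current_output )
{
    return ! current_output;            /* the new output is the inverse */
}

Either arrangement toggles the same way when seen from outside the switch.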

A Simple Description:

Mathematicians and engineers have a little more to say about things here, and push-on, push-off buttons actually sort of straddle the boundary line between two levels of complexity, but for my purposes here, a stated, or stateful, machine has the following characteristics:

  • There are timing dependencies, defined moments when the state changes, and

  • the output is dependent on things other than the current inputs, either

    1. internally tracked state, or

    2. the state of certain outputs from the previous step (output feedback).

Actually, I am not aware of a reason to distinguish between the dependency on internal state and the dependency on output feedback. Those are implementation details and have no effect on the function of the abstract machine unless you change the definition of the abstract machine.

More Simple Examples:

If a push-on, push-off switch is stateful, so is a pull-string switch: pull-on, pull-off is two states; pull-high, pull-medium, pull-low, pull-off is four. The input is the same, something inside the switch remembers what state the switch is in, and something inside the switch transforms the combination of the current state and the string-pull input to the next state.

If you have a relatively recent washing machine with a push-button control panel, well, that's clearly a stateful machine as I am describing them. Hmm. I'd mention calculators, but I'd be jumping the gun, so let's ignore those for a moment. What else?

Have you ever seen one of those older component stereos or PA amplifiers with rotary switches? The sound channel switch, for instance, might rotate through positions for no speaker output, left channel mono, right channel mono, stereo, and stereo reversed. These can be viewed as either stateful or stateless.

In the state-less view, each position is an independent input, and the position input directly determines the output state. (See why I said it would be silly to say that a state-less machine has no states? What it has is no states that serve in lieu of memory.)

A problem with this point of view is that the state-free description leaves us with no convenient way to describe inherent constraints on the inputs, such as not being able to go from off to reversed without passing through the other output states. If you need to account for those limits, you end up describing the switch as a state-less machine with a state-ful input machine in front of it, or some such. (There may be reasons for doing that, but you usually don't want to go to the trouble.)

In the state-ful view, there are two inputs: rotate left, and rotate right. And the positions are the states. And the rotor post and mechanism plus the wiring do the work of transforming the input and current state into the next state. Usually, this is the preferred view, because the rotary switch can be mapped directly to the paths between states.

Combinatorial Processing:

You might notice here that state-ful machines are inherently combinatorial. If so, you would also want to remember that combinatorial processing does not imply state-ful-ness; there can definitely be combinatorial processing in state-less machines.

Let's look at the combinatorial descriptions of the above simple examples. First the push-on, push-off switch:

  Current state    Input    Next state
  off              push     on  (the inverse)
  on               push     off (the inverse)

The pull-on, pull-off switch is basically the same, just substitute pull for push.

Note that, as with state-less machines, the labels we use are quite arbitrary, chosen for our convenience in understanding. If the target audience might not understand inverse, for instance, we might say opposite. Or, if the table is intended to be read directly by a program for automagically producing software (or programming gate arrays) to simulate the switch, we might prefer the more cryptic (to humans) "not current-state", because that is often the exact language the programming tools use (and it is easier for the machine to "understand").

And we could also use legalese to make it harder for engineers to read and easier for lawyers to think they understand, if that is our purpose.

The four-level pull-string switch is very similar:

  Current state    Input    Next state
  off              pull     low
  low              pull     medium
  medium           pull     high
  high             pull     off

Something to pay particular attention to here is that the description of the processing is also rather arbitrary. End-user documentation may describe it one way, and it may actually be implemented in an entirely different way that has (close enough to) equivalent results.

If such a switch were implemented mechanically, there might be something like a four-position trough with a spring-loaded follower, or perhaps a Geneva drive. The programming is in the shape of the troughs and the construction of the follower, and in the location and construction of the switch contacts and insulators.

If the switch were implemented with a microprocessor or in programmable logic, the symbolic description fed to the program compiler might look like the above table (with a lot of semantic decoration), or it might look more like the following procedural code (C language, using the pre-processor):

/* State labels, and the count of states, for the four-level pull-string switch. */
#define off 0
#define low 1
#define medium 2
#define high 3
#define state_count 4

extern int current_state;   /* the remembered state, kept elsewhere */

/* Each pull advances to the next level, wrapping from high back around to off. */
#define PROCESS_STATE current_state = ( ( current_state + 1 ) % state_count )

where the % symbol is the programming language's remainder (modulo) operator.

It's important to note here that the optimizers that are generally part of the development toolset for both microprocessors and programmable logic could well convert the invocations of the PROCESS_STATE macro to the equivalent of the table form. Or the tools might convert the table form to something that looks more like invocations of the macro.
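
For comparison, here is a rough sketch of what the table form of the same pull-string switch might look like in C, using a simple lookup array (the array and macro names here are my own):

/* next_state[ current ] gives the state after one more pull. */
static const int next_state[ state_count ] = { low, medium, high, off };

#define PROCESS_STATE_TABLE current_state = next_state[ current_state ]

Both forms describe exactly the same transitions; which one the tools actually generate is an implementation detail.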

The sound channel rotary switch example is a little more complicated:

  Current state      Rotate left input    Rotate right input
  off                off (end stop)       left mono
  left mono          off                  right mono
  right mono         left mono            stereo
  stereo             right mono           stereo reversed
  stereo reversed    stereo               stereo reversed (end stop)

Again, the table above allows for many kinds of implementations.
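
As one example, a rough sketch in C (the names are mine, and I am assuming the knob has end stops at off and at stereo reversed rather than wrapping around):

enum channel_state { CH_OFF, CH_LEFT_MONO, CH_RIGHT_MONO, CH_STEREO, CH_REVERSED };

/* Rotating right steps toward stereo reversed, rotating left steps back
   toward off, and the knob stops at each end.                           */
enum channel_state rotate_right( enum channel_state current )
{
    return ( current < CH_REVERSED ) ? current + 1 : current;
}

enum channel_state rotate_left( enum channel_state current )
{
    return ( current > CH_OFF ) ? current - 1 : current;
}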

Oh, yeah, I don't want to forget the rock and lever:

  Current state    Input              Next state
  rock down        push lever down    rock raised
  rock down        let lever up       rock down
  rock raised      push lever down    rock raised
  rock raised      let lever up       rock down
  rock raised      (rock slips)       rock slipped
  rock slipped     push lever down    rock slipped
  rock slipped     let lever up       rock slipped
  rock slipped     put rock back      rock down

As far as implementation goes, well, we might think it fun to implement a virtual version, with graphics and mouse clicks, using Java or Ecmascript or some other programming language, but I don't imagine anyone would want to do it in programmable logic.

Oh, who knows. Maybe someone wants to build a dedicated portable game machine version, like those Pokemon game gadgets that my daughter wants one of.

Heh.
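
Joking aside, if someone did write that virtual version, the heart of it might be nothing more than a transition function like the table. A rough sketch in C (the state and input names are my own, and the slip is treated here as just another event fed into the machine):

enum rock_state { ROCK_DOWN, ROCK_RAISED, ROCK_SLIPPED };
enum rock_input { PUSH_LEVER_DOWN, LET_LEVER_UP, PUT_ROCK_BACK, ROCK_SLIPS };

enum rock_state rock_lever_step( enum rock_state current, enum rock_input input )
{
    switch ( current )
    {
    case ROCK_DOWN:
        return ( input == PUSH_LEVER_DOWN ) ? ROCK_RAISED : ROCK_DOWN;
    case ROCK_RAISED:
        if ( input == LET_LEVER_UP )  return ROCK_DOWN;
        if ( input == ROCK_SLIPS )    return ROCK_SLIPPED;
        return ROCK_RAISED;
    case ROCK_SLIPPED:
        return ( input == PUT_ROCK_BACK ) ? ROCK_DOWN : ROCK_SLIPPED;
    }
    return current;   /* not reachable, but keeps some compilers quiet */
}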

Formulae:

So, you might be wondering, what would a state-ful formula look like? Can you think of a function where input is dependent on output? No? That wouldn't be a function? You're right.

So, relations, maybe?

X^2 - Y^2 = 4

No, this is just a pair of functions, or you could say the machine has a pair of outputs:

Y1 = ( X^2 - 4 )^(1/2)

Y2 = - ( X^2 - 4 )^(1/2)

or something like that.

State-ful Equations (Numerical Analysis):

Launch a projectile. In the ideal, the flight follows a parabolic curve. The parabolic curve is a simple function, something such as

x(t) = x0 + vx0 t

y(t) = y0 + vy0 t - (1/2) g t^2

very roughly speaking. I have to admit, I'm not extremely confident of my memory of physics here, and the notation is definitely non-standard (fudging a bit because I don't want to spend a lot of time experimenting with the interaction of Google Docs and HTML).

(I really should spend some time looking up the standard notation and polishing this up -- and checking the math, heh -- but the points I want to make should be visible above.)

(Take note of the initial positions (x0, y0) and initial velocities (vx0, vy0) in the formulae above.)

In the ideal, with zero or constant air friction, the formulae above are state-less. Even though initial velocity and position are part of the equations, the output is strictly a function of time.
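
In code, the state-less view is just a pair of pure functions of time. A minimal sketch in C, with parameter names of my own:

/* Position depends only on the time t and the fixed initial conditions. */
double x_at( double t, double x0, double vx0 )
{
    return x0 + vx0 * t;
}

double y_at( double t, double y0, double vy0, double g )
{
    return y0 + vy0 * t - 0.5 * g * t * t;
}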

But if you want to draw a parabola on a computer screen, you may choose to use numerical methods that compute the position and velocity at each point in time in terms of the position and velocity at the previous point in time. The method here would be state-ful (and more subject to round-off error).

And if you want to factor in real wind, the numerical methods approach is all you have, because of the variability of the wind, and the position at each point in time actually does become a function of the previous position and the wind at that position and point in time. Very roughly,

x(t1) = x(t0) + vx(t0) δt + (1/2) ax(t0) δt^2

y(t1) = y(t0) + vy(t0) δt + (1/2) ( ay(t0) - g ) δt^2

I'm probably screwing up the physics. I'll have to come back and check when my memory has had a little time to work and I have had some time to dig back into the numerical methods a bit. But you see that the calculation of the next point is dependent on the position and velocity at the previous point (input at the start of the evaluation and continuously recalculated), as well as the wind (which becomes another input).

The next state is dependent on the combination of the previous state and some input.
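
A rough sketch of one such step in C (the function and parameter names are mine, and the velocity updates, which the formulae above leave implicit, are written out explicitly):

/* Advance the projectile by one small time step dt, using the position,
   velocity, and wind acceleration left over from the previous step.     */
void projectile_step( double *x, double *y, double *vx, double *vy,
                      double ax_wind, double ay_wind, double g, double dt )
{
    *x  += *vx * dt + 0.5 * ax_wind * dt * dt;
    *y  += *vy * dt + 0.5 * ( ay_wind - g ) * dt * dt;
    *vx += ax_wind * dt;                /* the wind pushes the velocity around */
    *vy += ( ay_wind - g ) * dt;        /* and gravity pulls it down           */
}

Each call depends on what the previous call left behind, which is exactly the state-ful-ness in question.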

Revisiting the Electronic Amp

We talked about simple electronic amplifiers in the previous section. More advanced amplifiers use feedback circuitry to reduce noise and make the amplifier more stable. The design has to be done carefully, or the feedback produces more noise and/or less stability.

In some cases, the noise and semi-unstable effects are sold as features, such as reverb and certain kinds of fuzz in the audio field. We can produce formulae to describe these special noise effects, but they can be rather convoluted.

But the output can also be made more linear, with feedback used to correct some of the idiosyncrasies of the solid-state physics of real devices. Sometimes we use state-ful technology to produce physical devices which adhere more closely to a state-less ideal, which is a topic we may need to return to. The mathematical formulae describing feedback-controlled amplifiers involve feedback as well, and are state-ful at the internal level, even though the output of the amp as a whole is (or may be) state-less.
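
As a very rough sketch of that idea, and not of any particular circuit, here is an idealized negative feedback loop in C, iterated until it settles. The gain and feedback numbers are made up, and the rate term is a crude stand-in for the amplifier's finite response speed:

/* Idealized negative feedback loop. The settled output approaches
   input * A / ( 1 + A * B ), so the amp as a whole still looks like a
   simple gain stage, but each step of the calculation feeds the
   previous output back in.                                             */
double feedback_amp( double input )
{
    const double open_loop_gain    = 1000.0;   /* A: huge and sloppy     */
    const double feedback_fraction = 0.1;      /* B: sets the real gain  */
    const double rate              = 0.01;     /* response-speed fudge   */
    double output = 0.0;                       /* the internal state     */
    int i;

    for ( i = 0; i < 1000; i++ )
    {
        double target = open_loop_gain * ( input - feedback_fraction * output );
        output += rate * ( target - output );
    }
    return output;
}

From the outside it looks like a plain gain of about 9.9, but every pass through the loop depends on the previous output.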

So there are formulae that describe state-ful machines. In fact, if you think about it, in the real world, everything is state-ful, and if you think again about that, well, of course it is.

These Are Still Models

Nonetheless, what I have presented above are still only descriptions and mathematical models of actual machines. I have again referred to some real machines, but what I have presented here are only somewhat descriptive models. The caveats concerning mathematical models which I pointed out in that section on state-less machines apply here as well. Models are not real machines.

It may help to revisit the discussion on electronic amplifiers in the previous section. Designs are not real machines, even though real machines can be considered as projections on the real world of instances of their designs.

The question that some may want to ask is whether more complex models can approach the status of real machines. So, let us look at some more complex machines and models.

Copyright 2011 Joel Matthew Rees