Three interesting ideas from Herbert Simon’s “The Sciences of the Artificial”
July 13, 2010
At one point this book probably would have blown my mind (in fact, that’s the reason I picked it up) – unfortunately, I’d seen most of the individual content so many times in so many other places that it failed to make much of an impression. There are only so many times you can retread the familiar themes of cognitive science, the computational mind, evolution, artificial intelligence, control theory, and rationality. I blame LessWrong. Despite this, Simon is still able to pull some fresh ideas from the well-worn territory.*
1) Inner vs. Outer Environments
We can view the matter quite symmetrically. An artifact can be thought of as a meeting point – an “interface” in today’s terms – between an “inner” environment, the substance and organization of the artifact itself, and an “outer” environment, the surroundings in which it operates. If the inner environment is appropriate to the outer environment, or vice versa, the artifact will serve its intended purpose.
There are two interesting ideas wrapped up in this: first, how you conceptualize something largely depends on where you draw its boundary. Simon makes this explicit by defining an artifact as the boundary itself, an interface where information and material from the outer and inner environments are exchanged.
The second is that a stimulus to an artifact may only give you information about its outer environment, and nothing about the artifact itself. Press your thumb into the side of a water balloon and it will deform inward, but the shape of the deformation won’t teach you anything about the interior structure of the water balloon – it’ll only tell you about the structure of your thumb (the stimulus). You only learn about the interior structure of an artifact when it FAILS to respond to the environment properly. This is why the study of human error is so valuable in understanding how the brain works.
A bridge, under its usual conditions of service, behaves simply as a relatively smooth level surface on which vehicles can move. Only when it has been overloaded do we learn the physical properties of the materials from which it is built.
2) Realistic satisficing vs. ideal optimizing.
It’s all well and good to talk about optimality, but unfortunately, optimality is computationally intractable. (Cue discussion of there being more moves in a game of chess than atoms in the universe.) In the real world, agents are forced to choose between making guesses that are merely ‘good enough’ – satisficing – or optimizing over greatly simplified models.
To permit computers to find optimal solutions with reasonable expenditures of effort when there are hundreds of thousands of variables, the powerful algorithms associated with [optimization] impose a strong mathematical structure on the design problem. Their power is bought at the cost of shaping and squeezing the real-world problem to fit their computational requirements: for example, replacing the real-world criterion function and constraints with linear approximations so that linear programming can be used…[Satisficing] methods can handle combinatorial problems (e.g., factory scheduling problems) that are beyond the capacities of [optimization] methods, even with the largest computers…[Satisficing] methods also are not limited, as most [optimization] methods are, to situations that can be expressed quantitatively. They extend to all situations that can be represented symbolically.
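The contrast is easy to see on a toy combinatorial problem. The following is my own illustrative sketch (not from Simon, who wrote long before Python existed), using a hypothetical knapsack instance: the optimizer exhaustively checks all 2^n subsets for the true best answer, while the satisficer just samples subsets and stops at the first one that clears an aspiration level.

```python
import itertools
import random

def optimize(items, capacity):
    """Exhaustive optimization: examine every subset (2^n of them)
    and return the value of the best feasible one. Guaranteed optimal,
    but intractable once n grows past a few dozen."""
    best_value = 0
    for r in range(len(items) + 1):
        for subset in itertools.combinations(items, r):
            weight = sum(w for w, v in subset)
            value = sum(v for w, v in subset)
            if weight <= capacity and value > best_value:
                best_value = value
    return best_value

def satisfice(items, capacity, aspiration, tries=1000):
    """Satisficing: sample random feasible subsets and stop at the
    first one whose value meets the aspiration level ('good enough').
    No optimality guarantee, but cost doesn't explode with n."""
    for _ in range(tries):
        subset = [item for item in items if random.random() < 0.5]
        weight = sum(w for w, v in subset)
        value = sum(v for w, v in subset)
        if weight <= capacity and value >= aspiration:
            return value
    return None  # aspiration never met within the search budget

# Hypothetical (weight, value) items and a capacity of 5.
items = [(2, 3), (3, 4), (4, 5), (5, 8)]
print(optimize(items, capacity=5))                  # true optimum
print(satisfice(items, capacity=5, aspiration=5))   # merely good enough
```

The aspiration level is doing the real work here: lower it and the search terminates almost immediately; raise it toward the true optimum and satisficing degenerates back into exhaustive search.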
3) Design as a continuous process.
As humans, it’s our lot to never be satisfied with anything. Our desire to shape the world keeps technological progress marching forward. The hedonic treadmill always keeps us wanting more. No endeavor we undertake will ever really be complete. Ultimately, it will provide the jumping-off point for something coming after it. Simon not only nails this, but puts it center stage:
A paradoxical, but perhaps realistic, view of design goals is that their function is to motivate activity which will in turn generate new goals.
This aesthetic is at odds with what I suspect is a built-in desire to complete things – but what sort of results are achieved when it is made axiomatic? Clearly this can’t be too far from the constant iteration that seems necessary for business success (especially in startups – see “minimum viable product”, “pivoting”, etc.), the unending cycle of hypothesis, test, and revision that is modern science, and (perhaps) the Bayesian process of updating on evidence.
There’s lots more to like about this book (especially the last chapter on complexity, which I may write up later), but those are a few ideas that stuck out on my first pass through.
*This isn’t quite fair, as the book was first written in the late ’60s, when most of these ideas would have been bleeding edge – it beats Gödel, Escher, Bach by about 10 years (which I’d consider to be a very similar book, and which DID blow my mind the first time I read it).