As I’m fond of mentioning, we can learn a lot about something by comparing it to the simplest possible system that still exhibits its properties. Previously I examined human behavior by comparing it to the behavior of an agent with basic properties. Now I want to examine how humans perform design by comparing it to the simplest possible activity that still fulfills our intuitions about what “design” is. So what would that activity be like?

A Goal

Design takes place in service of an attempt to modify the environment in accordance with an agent’s preferences. Though design as humans perform it is often an attempt to fulfill many simultaneous goals (or one goal with many constraints), a single goal is all that’s required. This goal need not be complicated, and can (theoretically) be instantiated in many different ways – as an error signal, a decision tree node, or a control flow point in software (an if statement).

A Variety of Possible Solutions

A design is a structure that modifies the environment to allow the agent to complete its goal (reduce its error signal). This structure has many different attributes – high-level relationships between components which are independent of underlying, finer-grained attributes. However, only some of these attributes will be relevant to satisfying the goal. Consider a bridge crossing a chasm. It may be made of concrete, wood, steel, carbon fiber, etc., but the specific building material is not relevant to whether it’s a successful bridge. It merely needs the proper length, width, and load capacity to be successful – the rest of its attributes are unimportant. So another important aspect of design is that there are many possible ‘solutions’: structures with the necessary high-level attributes. The agent’s (or human’s) goal is to find one of these solutions in a sea of structures that lack them.

Directed Solution Search

For it to be ‘design’, this search needs to be somewhat directed – simply trying all possible combinations of objects until one has the necessary structure isn’t design. So design is a directed search process.

Limited Information/Computation

I mentioned previously that this may be a quirk of how humans engage in design rather than a fundamental property of it. Nevertheless, in order to duplicate many of the features that are inherent to design as we know it (in essence, anything related to design as an iterative process), the design agent must not be omniscient with respect to its environment. This may include imperfect knowledge of the environment’s laws of physics, limited memory of previous events, limited knowledge of its own preferences, and limited ability to simulate the effects of an operator (taking an action). Humans possess all these limitations, and they are the primary reason our design takes place as a directed trial-and-error process. Furthermore, any agent that exists in the ‘real’ world, rather than being deliberately created in a simulated environment, will necessarily have these limitations as well.


The above process takes place, roughly, by transforming the goal into a necessary attribute of a structure, decomposing that goal into subgoals/subattributes, and then satisfying them using a combination of perceptions, information about the world, and the agent’s own abilities (operators). If that sounds complicated, well, it is.

Consider a software agent that exists in a ‘toy’ environment. The agent’s goal is to reach a certain point in this environment, but its path is completely blocked by a chasm. Surrounding the agent are various objects. If the agent ‘wants’ to cross the chasm, it must combine these objects into a bridge and place it across the gap – no object by itself is long enough to be used as a bridge.

Let’s say the agent has two different abilities it can use to combine objects – brelding and brapling. The agent must breld or braple objects together to make a structure that is longer than the chasm is wide, and place it across the gap to reach the other side. However, these objects come in a variety of different forms, and respond differently based on which objects are being joined and whether they are being brelded or brapled. Red circles must be brelded to other circles, but can be brapled to anything. Pink squares can be brelded to anything larger than them and brapled to anything smaller. Because of these ‘laws of physics’, only a few structures exist that have the necessary attribute – length – to be used as a bridge. The agent must find one of these structures to satisfy its goal, and it starts out without knowing the specific rules of combination – how will it proceed?

It might start by searching for the largest object it can find. Once it has it, it might compare that object’s length to the length that’s necessary. Finding it too short (getting an error signal), it will then take actions to try to make it longer (reduce this error signal). It might try brelding or brapling different objects to it – perhaps by choosing randomly, perhaps by some sort of decision tree. If one works – say “pink square brelded to orange circle” – it will make a note of the action and the attributes of the combined objects, and then attempt to use that specific action again when increasing the length – either by amending its decision tree or by keeping a list of “known actions” to try before choosing randomly. In this way, it will build up a list of ‘design’ rules based on the attributes of the objects, and eventually be able to instantly construct the necessary bridge to cross the chasm.
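A minimal sketch of such an agent might look like the following. Everything concrete here – the shapes, the join rules, the lengths, even which operation succeeds – is invented for illustration, since the toy environment above leaves those details open:

```python
# Toy chasm-crossing agent. All shapes, rules, and numbers are invented;
# the agent learns which joins work purely by trying them.

CHASM_WIDTH = 10  # the necessary attribute: a structure at least this long

def join_succeeds(op, a, b):
    # Hidden 'laws of physics'. The agent never reads this function;
    # it only observes whether a join succeeded.
    if op == "breld":
        return a["shape"] == "circle" and b["shape"] == "circle"
    return True  # in this toy world, brapling always works

def build_bridge(objects):
    learned = {}  # (op, shape, shape) -> bool: the agent's growing rulebook
    structure = max(objects, key=lambda o: o["length"])  # heuristic: start big
    pool = [o for o in objects if o is not structure]
    while structure["length"] < CHASM_WIDTH and pool:  # error signal: too short
        candidate = pool.pop()
        # Try operations already known to work on this shape pair first.
        ops = sorted(("breld", "braple"),
                     key=lambda op: not learned.get(
                         (op, structure["shape"], candidate["shape"]), False))
        for op in ops:
            ok = join_succeeds(op, structure, candidate)
            learned[(op, structure["shape"], candidate["shape"])] = ok
            if ok:  # record the result and extend the structure
                structure = {"shape": candidate["shape"],
                             "length": structure["length"] + candidate["length"]}
                break
    return structure, learned

objects = [{"shape": "circle", "length": 3}, {"shape": "square", "length": 4},
           {"shape": "circle", "length": 2}, {"shape": "square", "length": 5}]
bridge, rulebook = build_bridge(objects)
```

After one run, the rulebook already contains entries such as which operations succeed for which shape pairs – exactly the accumulated ‘design rules’ described above.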

This situation APPEARS to satisfy many of our intuitions about what it means to design. It’s important to note what this DOESN’T include. The agent doesn’t possess any “understanding” or “intelligence”. It doesn’t need to know what the chasm is – it may very well exist only as an ‘if’ statement in its control flow. It doesn’t include high-level abilities to simulate this environment, or to predict what happens when it uses its bridge. What it DOES include is the criteria listed above – an error signal the agent is attempting to eliminate, a variety of possible structures that will serve to eliminate it, a directed search through the space of all possible structures, and limited information about the environment.

There are quite a few different computational methods capable of “automatically” generating designs. The most famous of these are probably genetic algorithms, but there are other similar methods (such as particle swarm optimization), and some extremely different and clever methods based on things like matrix algebra or task analysis. They work by essentially taking some representation of the design, performing a series of operations on it to transform it, evaluating it, and then repeating until it’s deemed successful enough. I put “automatically” in quotes because these methods are largely dumb processes – they have no intelligence or insight to speak of, and merely reflect knowledge of the problem that already exists in the designer’s head. This knowledge is represented in two key places. One is the evaluation function, which determines whether a design is good or not. The other is the way the designs are encoded so the software can manipulate them.

(When I say “encoding”, I mean the conceptual level that the design is created at – a piece of software, for example, is going to wind up as a long string of 1’s and 0’s, but the conceptual level of encoding is at the level of data structures and algorithms that the software implements.)

Encoding is extremely important, as it determines what the software is actually capable of generating. For example, consider a genetic algorithm where a design is encoded as a string of 10 binary digits – [0 0 0 0 0 0 0 0 0 0]. Because this encoding only allows for 1024 different values, there are a total of 1024 possible designs for the algorithm to evaluate. The encoding thus defines the state space being searched through for a solution.
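To see just how small such a state-space is, we can enumerate it outright. This is only a sketch – the example above doesn’t specify a fitness function, so the one here (count the 1s) is a stand-in:

```python
import itertools

ENCODING_LENGTH = 10  # the 10-binary-digit encoding from the example

def fitness(design):
    # Stand-in evaluation function: prefer designs with more 1s.
    return sum(design)

# Every design the algorithm could ever produce or evaluate:
state_space = list(itertools.product([0, 1], repeat=ENCODING_LENGTH))
best = max(state_space, key=fitness)  # exhaustive search is trivial at this size
```

With only 1024 states, exhaustive search is trivial – which is precisely why such encodings feel so unlike real design problems.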

The encoding ALSO defines the sorts of transformations that can be applied to a design. If your design is encoded as a verbal description, you can’t perform arithmetic operations on it. If it’s encoded as a matrix representation, you can’t simply say “move the bevel gear 3 inches to the left”. So in addition to defining the possible states a design can have, the encoding defines the paths through state-space that you can traverse.

Essentially, the encoding of a design determines the sorts of solutions you’re looking for, and how those solutions can vary. As algorithmic design is still in its infancy, most encoding methods are extremely primitive, searching through a comparatively tiny state-space. Comparing this to the actual task of designing is akin to the difference between searching for treasure in a sandbox vs. the Sahara. Knowing where to look doesn’t completely solve the problem, but it does much of the heavy lifting. It merely seems simple because our processes of conceptualization, categorization, and analogical reasoning (to name but a few of the processes we use while designing) all happen automatically, below the level of conscious awareness.

Performing more complex design tasks requires more complex encoding – complexity means many finely tuned parameters which means a large number of possible states. As I’m so fond of bringing up, anything designed to be used by humans in the real world will have trillions upon trillions of possible states. The human brain is the only computer capable of finding solutions in such an enormous space – does the encoding it uses give a clue as to how it’s capable of this?

It might. But I suspect it’s probably more difficult than that (even though I believe encoding, and related issues, are at the heart of design as a “problem”). Consider, for example, the inspiration for genetic algorithms: the genetic code itself. Our genes are encoded as long strings of bases – A, C, T, and G. Each group of 3 bases codes for an amino acid, of which there are 20 in total (though there are 64 possible triplets, some amino acids are encoded by multiple triplets, and some triplets do not encode anything). So there are 20 possible “symbols” to choose from. Amino acids are put together serially in long chains, which then fold up into complex structures to form proteins. These amino acid chains do not have a fixed length – they can be anywhere from just a few links to many thousands. It’s these proteins which perform the work of building and maintaining our bodies.

Thus, even though the encoding here has been mapped completely, we’re still a long way from understanding what it implies – how it’s able to build an organism. I suspect that, rather like a Turing-complete programming language, once an encoding scheme reaches a certain point it will be capable of representing anything (though possibly extremely inefficiently). However, also like a programming language, different encodings make certain things easier and certain things harder. For example, while it’s POSSIBLE to transform the verbal sentence “ONE PLUS ONE” into “TWO” using a purely linguistic ruleset, the operations that make up this “math” are going to be fantastically more complicated than the same operations performed on integers. Likewise, we may not know the details of how our bodies are built, but the cracking of the genetic code was a great leap in our understanding. Understanding how the brain encodes information (still largely an unsolved problem) would thus radically increase our understanding of design, even while leaving much unsolved.
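As a concrete (and deliberately silly) illustration, here’s a toy ‘linguistic’ adder for a tiny vocabulary. Note that even this sketch cheats by translating the words into integers internally – a ruleset operating purely on the strings themselves would be vastly larger, which is exactly the point about encodings:

```python
# Hypothetical word-arithmetic over a four-word vocabulary. A purely
# string-based ruleset would need a rule for every sentence; here we lean
# on the integer encoding, where addition comes essentially for free.

WORDS = {"ONE": 1, "TWO": 2, "THREE": 3, "FOUR": 4}
NUMERALS = {v: k for k, v in WORDS.items()}  # reverse mapping: 2 -> "TWO"

def linguistic_add(sentence):
    left, op, right = sentence.split()
    assert op == "PLUS"
    return NUMERALS[WORDS[left] + WORDS[right]]
```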

I’ve marked this as part I because I can’t seem to shake the sense that encoding is incredibly central, and that I’ve only grasped the outer edges of it.

The best thing about design research is that it allows you to draw on the insights from a huge number of different fields.

It is also the worst thing.

A powerful way of making progress is applying an old insight in a new fashion. The fact that “design” is so conceptually similar to so many activities suggests there may be quite a bit of low-hanging fruit to be had simply from examining other fields thoroughly. I’m not exactly a scientific insider, but it’s my impression that breadth of scholarship, as opposed to depth, is a somewhat rare find.

That same blessing, in another light, is a curse. Beyond the fact that it’s possible to lose yourself forever in doing research, beyond the fact that any “original” idea you might have has probably already been duplicated under different nomenclature, there’s the fact that it becomes a great deal of work to determine what it is you’re actually studying!

What is “design” research anyway? Is it a form of problem solving? Studying creativity? Conceptual knowledge? Product development? AI systems? Team Cognition? Human-Computer Interaction? Engineering design? Embodied Cognition?

Strong cases can be made for all of these. Look through cgpapers and you’ll find papers from every single one of these areas. And this isn’t exactly helped by the fact that the “field”, to the extent that one exists, is not exactly defined by rigor. Look through Design Studies, or one of the other major journals, and you’re as likely to find baseless conjecture as well-performed empiricism. Thus the field itself doesn’t provide any strong cues as to the sort of work you should be doing – you’re forced to figure it out for yourself.

This lack of cohesion, at least to me, suggests I would be best served by returning to the roots of the field, applying myself to understanding them fully, and then carefully reasoning out fruitful next steps. Then and only then should I make use of the abundance of field-spanning literature available.

It’s frustrating, to be sure. But without this carefully laid conceptual groundwork, it’s all too easy to waste weeks, months, or years on inappropriate approaches.

Conceptual knowledge is our ideas about concepts – the idea of “car” apart from any specific car, for instance. Traditionally, it was believed that conceptual knowledge is stored in some amodal representation, and that the concept of “car” was completely separate from the visual stimulus of a particular car. Increasingly, it appears that this is not the case, and that conceptual representations are in fact based in modality-specific areas such as vision, object motion, etc. When you think about a car moving, you’re not doing some sort of semantic processing that’s isolated from your senses – you’re in fact re-enacting the stimulus of seeing a car move, using your visual cortex. Work needs to be done to create a theory of concepts (if it is indeed a valid theory) that’s rooted in modal processing.

In the language of C-K theory, design is rooted in the creation of concepts, and the generation of knowledge from those concepts. The way the human brain processes conceptual knowledge is thus extremely relevant. If we’re in fact using modal processing, that’s going to constrain the sorts of transformations that can occur in the generation of new concepts. Of course, so would a completely amodal processing system based purely in semantics – but each would do so in its own particular way. Knowing how our conceptual reasoning works is a key component of knowing how humans design artifacts.


To actually understand something, it’s important to understand what occurs in the most general possible case. If you’re learning multiplication you may make progress by first learning that 2 x 2 = 4, then 2 x 3 = 6, then 2 x 4 = 8, but you won’t really understand it until you know what occurs when you have n x m, when you multiply any number times any other number. So to understand what design really is, it’s not enough to learn about design as it happens to apply to humans. We need to learn how to design for any possible thing that can be designed for.

In principle, we can design for anything that has a desire or goal about the state of the world, provided that the creation of an artifact is needed to satisfy it. We call things that have goals agents. In the real world agents tend to take the form of complex bundles of biology – humans, animals, and plants all have goals which they try to bring about through various means. But things like corporations, AI programs, and mechanical control systems can be thought of as agents as well.


All agents have a few basic parts:

Goal: An agent is fundamentally anything that can be said to want something. This want may be implicit in its behavior and merely a convenient description, rather than something explicitly stated – when I pull my hand back from a hot stove I can be said to ‘want’ to avoid burning my hand, even though my behavior is completely reflexive. Wanting can be thought of as having a simulated state of the world – a goal state – that fails to match with the current state of the world. An agent will try to change the world around it to match its goal state through the use of…

Operators: These are the atomic actions an agent is capable of taking, which transform the world into a different state. So a software agent, for instance, might have the INC operator, which increments a given value by 1. If it sees a world state that consists of the value 2 and has a goal state that consists of the value 6, it can reach its goal by applying the INC operator 4 times. Human operators consist of the muscle tensions that we use to control various parts of our body, and the transformations we’re capable of performing on our mental representations (performing addition in our head, or remembering something, for instance). Higher-level operators can be constructed out of more basic ones – our INC operator can be used to construct the + operator, which lets you increment a given value by any amount. Likewise, complex sequences of muscle tensions can be grouped into the operators “walk” or “speak”. Agents will apply their operators until the goal state is reached, which is determined by…

Sensory input: An agent needs information about the world in order to accomplish its goals. It gets this information through sensors, which send a stream of data that (ideally) corresponds to some portion of the outside world. An agent’s information is only as good as its sensory inputs – it is incapable of perceiving the world directly. Before an agent can make use of this sensory input, however, it needs…

Cognition: Some method of information processing to transform sensory input into operators as output. This may be as simple as a list of rules that selects an operator whenever a particular pattern is encountered, or it may be a complex assemblage like the human brain, which can choose between more and less desirable goals, make inferences about its environment, create complex sequences of operators, etc.

These parts give us a more or less complete description of the functioning of any goal-based (teleological if you’re fancy) entity – from simple things like viruses and bacteria to complex entities which exist as hierarchies of sub-agents, like businesses.
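As a sketch, here’s an agent with all four parts, built around the INC operator from above. The world is reduced to a single integer and the sensor reads it perfectly – simplifications made for brevity:

```python
def make_agent(goal_state):
    # Goal: an internal target state the world should be made to match.

    def sense(world):
        # Sensory input: here, a perfect reading of the world's one value.
        return world

    def cognition(current):
        # Cognition: a single rule mapping percepts to operators.
        return "INC" if current < goal_state else None

    def apply_operator(world, operator):
        # Operators: INC transforms the world into the next state.
        return world + 1 if operator == "INC" else world

    def run(world):
        while True:
            op = cognition(sense(world))
            if op is None:  # goal state reached; stop acting
                return world
            world = apply_operator(world, op)

    return run

final = make_agent(goal_state=6)(2)  # applies INC four times: 2 -> 6
```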


So let’s say some agent is plunked down in the middle of an environment – what does it do? How do we describe its resulting behavior?

First, the agent will begin to receive information about its environment via sensory input. A complex agent will use this information to construct a model of its environment – a simpler agent will have the model implicitly encoded into the rules it uses for operator selection. (In actuality, many agents will include many different types of operator selection – humans unquestionably build models of the world, but also have simpler, reflex-based rules for choosing actions.) Since an agent can only get information via its sensors, it can only ever act based on this world model – never the world directly.

The agent will compare this model of the world (which we’ll call the current world-state) to some goal-state it has. For simple goals, the goal state completely defines how it differs from the current state, and which operators can be used to reduce this difference. If my goal is to get a soda from the fridge, I know the final state has me in the same position holding a soda, and I know that achieving this involves me getting up, walking to the fridge, opening it, and walking back. The majority of our goals are simple ones like this, where we know the path from the current state to the goal-state.

Sometimes, however, the goal state may be completely defined, but it may require a nontrivial or non-obvious sequence of operators to reach it. To an agent, these are known as problems. If I go to the fridge and find it bolted shut due to a disgruntled roommate, I now have a problem – I have to figure out how I’m going to get the fridge open. The path from my current state to the goal state is no longer obvious – I have a few ideas of operators to select (prying the door open, say), but I’m incapable of fully predicting the particular sequence that will lead to me getting soda. I now require some strategy in operator selection, whereas the initial get-soda situation required no strategy – the path from the current state to the goal state could be fully predicted.

Then we have situations where we don’t know the path to the goal state, AND the goal state is insufficient to specify how it differs from the current world-state. These are typically situations where the goal can be satisfied by a variety of dissimilar world-states. This may be because the goal is very abstract (“I want to be more successful”), or because the final world-state is extremely complex (“I want to build a human-powered helicopter”). These goals are the most difficult – not only is the path to the goal state unclear, but how far you are from the goal state is no longer obvious.

The goal states and world states of an agent combine to form a problem space. This space consists of all the possible world-states an agent can be in. The agent’s model (based on sensory input) defines its current world-state, and the goal defines a destination (or class of destination) world-states. The task of the agent is to navigate from point A, the current position, to point B, the goal position, using its operators to transform its current world-state.
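A toy version of that navigation, as a sketch: world-states are integers, the two operators (INC and DOUBLE, invented for this example) transform them, and the agent searches breadth-first for a sequence of operators connecting point A to point B:

```python
from collections import deque

OPERATORS = {"INC": lambda s: s + 1, "DOUBLE": lambda s: s * 2}

def find_path(current, goal, limit=1000):
    # Breadth-first search over the problem space: states nearest the
    # current state are explored first, so the returned path is shortest.
    queue = deque([(current, [])])
    seen = {current}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path  # the operator sequence that reaches the goal
        for name, op in OPERATORS.items():
            nxt = op(state)
            if nxt not in seen and nxt <= limit:  # bound the space searched
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None  # no path within the bounded problem space

path = find_path(2, 6)  # e.g. INC then DOUBLE: 2 -> 3 -> 6
```

Even this tiny example shows why exhaustive navigation doesn’t scale: the frontier grows with every operator added, which is what makes the heuristics discussed below necessary.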

However, as we’ve mentioned before, these problem spaces are truly enormous. An agent can’t merely draw a line from its current state to its goal state and follow it. For an agent to be successful, it must come equipped with a variety of measures for navigating its problem spaces. For one, agents never consider the entire current world-state, only the differences between it and their goal. And even then, the model doesn’t contain all possible paths from the current state to the goal state. A construction company won’t consider all possible materials to construct a building out of – it will limit itself to a few (at least initially), expanding the list only if a path to the goal state cannot be found. This has the benefit of restricting the possible operators used. If I’m only considering steel-construction solutions, I can completely disregard the methods for building with wood or concrete.

Agents are also equipped with a variety of heuristics that allow them to select operators that are likely to be successful. These may be specific to a particular context (problem-space), such as “use an 8 inch deep beam for spans of 10 feet or less”, or they may be more general methods such as “use what worked last time”, “use whatever will bring you closest to your goal quickest”, “use the smallest number of elements”, or “use the most frequently used operators”. All of these are rules of thumb for picking operators that have a high probability of leading an agent to its goal. Because the problem-space an agent considers is necessarily limited, it also needs heuristics for constructing problem-spaces, as well as for selecting the operators used within them. A designer choosing to look for a steel solution for a building because that’s what he used last time is using a heuristic to construct a problem space he believes contains a path to his goal.

Operators are often specific to a certain sort of problem space. The less a goal specifies a final world-state (thus providing less structure to the problem-space), the larger the variety of operators that can be chosen to move the current world-state towards a goal state. An agent with the ability to effectively choose from a large number of possible problem-spaces and operators can be thought of as having creativity. Creative problem solving involves problems that are amenable to a variety of different problem spaces and means of solving them – problems that can be contextualized in a variety of different ways.


Design problems tend to fall squarely in the category of “insufficiently specified goals and operators”. As mentioned previously, the problem-space for design problems is far too enormous to sift through exhaustively. As such, much of the work of design is structuring the problem (finding a problem-space that contains the current state, the goal state, and a path between them). The actual application of the operators is often a trivially small portion of the total design work. When we study the work of designers, what we’re really interested in is how they select their problem spaces and how they choose their operators – which are, of course, related.

How does this relate to our initial definition of design as the creation of structure? Structure is essentially the set of features that are invariant over a great number of different problem-spaces – relationships between parts that are the same for a large variety of different parts. Designers work with, and find paths through, aspects of their problem-spaces that are valid across a great number of domains. The process of, say, designing a building is largely the same whether the work is being done with steel, wood, or concrete – create a model of the structure, estimate how the forces will be distributed, calculate the necessary size and shape of each member based on the resistance equations for that material, insert the member into the building, and use it to determine the sizes of the other members. So design can be thought of as finding paths through problem-space by examining a large number of similar problem-spaces, finding the invariants in them, and using those invariants to find reliable paths to the goal state.

Since we’ve been interested in the process of science: what is science in terms of problem-spaces and operator selection? Science is simply the pursuit of the goal of predicting sensory input (with, of course, more specifics added for the particular methods of human science). The artifacts scientists construct can then be used as operators for the reliable production of certain sensory inputs.


Conceptualized this way, we can see some aspects of the design process that, while important for humans, may not be representative of design in general. For instance, in design the goal-state often changes significantly as progress towards it is made. This is an artifact of our limited knowledge of our environment, and our limited ability to process the knowledge we do have. When we imagine a goal, we’re incapable of imagining its full range of side effects, or the necessary sub-goals, or the side effects of those sub-goals, etc. Partial solutions often make previously unconsidered information more salient, and new information is constantly being received from sensory input – thus, our goal structures are subject to change from moment to moment. An agent that exists in the world will necessarily operate under limited resources – however, it may not share the particular limitations of humans. It may, for instance, be capable of exhaustively describing its goals in terms of what it can perceive, but have extremely limited capacity for perception. Designing for such an agent would be much less iterative than designing for a human, as the progress being made towards the goal state would be clear at every step.

Alternatively, an agent may have almost NO access to its goal states – perhaps only having a simple numeric scale from 1-10 representing ‘distance from current goal’. Such a design process would need to be EXTREMELY iterative to compensate for the lack of information about the goal state. Clearly both these cases are very different from how humans operate. However, the abstract features described above – properly selecting the problem space and operators – would still apply to agents such as these.

Suppose someone at Point A is sending a message to us at Point B. At Point B we don’t know what the message is, but we know something about the sender. For instance, perhaps we know that the message is being sent in English, and it’s being sent one letter at a time. What can we say about the information content of this message?

For mathematical reasons, we’ll assume that the source produces these letters ergodically. A source is ergodic if the probabilities of the symbols in one particular message are the same as the probabilities of the symbols at a specific point across all possible messages – roughly, sampling one message over a long period of time is equivalent to sampling many different messages simultaneously. Most written language is approximately ergodic.

So we have our source, and it’s producing this stream of symbols. Each symbol carries a certain amount of information, and the more possible symbols there are, the more information each one carries. English words like the ones our source is using are made from an alphabet of 26 characters, and are typically 4-5 letters long. But the same word in Chinese, drawing from an alphabet of thousands of characters, may be represented by a single symbol. Thus the information content is in some way dependent on the number of possible symbols we have to choose from.

Also, since we don’t know what the message is, we don’t know which letter will occur next. But because we know it’s in English, we know something about the probability of each letter. The structure of English words means that letters like ‘e’ and ‘a’ have a very high chance of occurring, while letters like ‘q’ and ‘z’ have a very low chance. Because we know some letters are more likely than others, we already know something about the message we’re receiving. This means that each symbol carries correspondingly LESS information. To see why, imagine if we knew our source was only sending the letter ‘F’ over and over again. This would give ‘F’ a probability of 1 and all other letters a probability of 0. Since we KNOW each symbol is going to be an F, this message carries no information – it doesn’t tell us anything we don’t already know. The information content of a symbol is thus maximized when we don’t know anything about the probability of getting a symbol (thus automatically assigning each a probability of 1/(total number of symbols)), and minimized when we know exactly which symbol we’re going to get (assigning it a probability of 1).

So the information content of an individual symbol is dependent on the total number of symbols the source could be sending, and the probability of getting each symbol. To account for the number of possible symbols the source could be sending, we’ll determine the total number of binary digits required to represent a particular symbol. To represent 4 different symbols, we need 2^x = 4 or log2(4) = 2 binary digits. To represent the 26 letters of the English alphabet, we need log2(26) ~= 4.7 binary digits. To represent the approximately 47,000 characters of written Chinese, we need log2(47,000) ~= 15.52 binary digits.
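These figures are easy to reproduce (the alphabet sizes are the ones used above):

```python
import math

# Binary digits needed to pick out one symbol from an alphabet of a given size.
def digits_needed(alphabet_size):
    return math.log2(alphabet_size)

four_symbols = digits_needed(4)    # exactly 2
english = digits_needed(26)        # ~4.70
chinese = digits_needed(47000)     # ~15.52
```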

Now let’s take into account the probability of a symbol occurring. Knowing there are 26 total possible characters, without any other information, is the same as assigning a 1/26 chance to each character. log2(26) thus becomes log2(1/p), where p is the probability of getting a particular symbol (this is more conventionally written as -log2(p)). Since the symbols now differ in probability, we weight each symbol’s information content by the probability of that symbol occurring, calculate this value for each possible symbol, and then add them all together: H = -Sum(p*log2(p)), summed over all N symbols the source could be sending.

This value H is known as the entropy of a source – it’s the average amount of information each symbol carries. Information in this context means ‘the degree that our uncertainty has been reduced’. Entropy is measured in bits, and each bit reduces our uncertainty by half.
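The formula above fits in a couple of lines of code. A minimal sketch (the function name is mine; it uses the log2(1/p) form from earlier, which is equivalent to -p*log2(p)):

```python
import math

def entropy(probs):
    """Average information per symbol, H = sum(p * log2(1/p)), in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# A source that only ever sends 'F' carries no information:
print(entropy([1.0]))        # 0.0 bits
# 26 equally likely letters maximize the per-symbol information:
print(entropy([1/26] * 26))  # ~4.70 bits, matching log2(26)
```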

A quick example. Say we have a 4-sided die, labeled 1-4. We know that sides 1 and 3 each have a 30% chance of coming up, and sides 2 and 4 each have a 20% chance of coming up. How much information does each roll of the die give us (by how much is our uncertainty about the value of a roll reduced)?

The calculation yields -0.2*log2(0.2) – 0.2*log2(0.2) – 0.3*log2(0.3) – 0.3*log2(0.3) ~= 1.971 bits of information. This is very close to the 2 bits that would be required for a completely fair die (2^2 = 4 possible outcomes). This makes sense, as each side’s probability only deviates from that of a completely fair die by a small amount (5 percentage points).
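Checked numerically (a sketch, using the probabilities above):

```python
import math

probs = [0.3, 0.2, 0.3, 0.2]  # sides 1-4 of the weighted die
H = sum(p * math.log2(1 / p) for p in probs)
print(round(H, 3))  # 1.971 bits, just under the 2 bits of a fair 4-sided die
```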

For a less mathematical example, consider the game 20 questions. The first player thinks of something, anything, and the second player is given 20 yes or no questions to try to figure out what the first player is thinking of. A yes or no question is equivalent to 1 bit, so a full game is capable of choosing between 2^20 = 1,048,576 possible answers.
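The question-as-bit idea can be sketched as a binary search: each yes/no answer halves the remaining candidates, so 20 answers are enough to pin down any one of 2^20 items (the guessing code below is my own illustration, not part of the game):

```python
def guess(secret, n_items):
    """Find `secret` in range(n_items) using yes/no questions of the
    form 'is it less than mid?', counting the questions asked."""
    lo, hi, questions = 0, n_items, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        questions += 1
        if secret < mid:  # a "yes" answer
            hi = mid
        else:             # a "no" answer
            lo = mid
    return lo, questions

answer, used = guess(777_777, 2**20)
print(answer, used)  # recovers the secret in exactly 20 questions
```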

This is the method by which information is quantified, and it’s the foundation of all information theory.


January 18, 2011

Previous Post: Constraints, Creation-Space and What Design Is

Design is the creation of a structure that solves a particular problem. So what, then, is structure?

Dictionaries are often surprisingly good at revealing shades of meaning that may be elusive, and this case is no exception. The relevant definition for our purposes is “the relationship or organization of the component parts…”

Does this fit with what we’ve developed so far? Structure defines the relationships between parts, regardless of what those parts actually are.

So for example, the structure of a molecule is made up of the bonds between atoms, and how those atoms are arranged (i.e., where they are in relation to one another). Two molecules with the exact same component atoms will have different structures[1] if those atoms are arranged differently.
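One way to see “same parts, different relationships” concretely – a minimal sketch, with the representation my own and the molecules simplified to their heavy-atom skeletons (ethanol is C-C-O, dimethyl ether is C-O-C):

```python
from collections import Counter

# Same component atoms, different bonds between them (isomers):
ethanol = {"atoms": ["C", "C", "O"], "bonds": [(0, 1), (1, 2)]}
ether   = {"atoms": ["C", "O", "C"], "bonds": [(0, 1), (1, 2)]}

def parts(mol):
    """What the molecule is made of, ignoring arrangement."""
    return Counter(mol["atoms"])

def structure(mol):
    """Which kinds of atoms are bonded to which -- the relationships."""
    a = mol["atoms"]
    return Counter(frozenset((a[i], a[j])) for i, j in mol["bonds"])

print(parts(ethanol) == parts(ether))          # True: identical parts
print(structure(ethanol) == structure(ether))  # False: different structure
```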

And the structure of a language (also from the dictionary) is “the pattern of organization as an arrangement of linguistic units”. The sentences “The dog jumped over the fence” and “The cow jumped over the moon” have the same structure, even though the individual words are different, because the relationships between the words are identical.

So structure is the relationship between the parts of something. These relationships are considered separately from what the parts actually are. (However, they are not independent; what a part is determines what relationships it’s allowed to have, and the relationships partially constrain what the parts can be.)

So what does this have to do with design?

When you’re designing, in the simplest case, you have a set of functions that must be fulfilled and a set of requirements that must be met. These are, in effect, relationships between the designed object and the outside world. This is essentially what Simon was stating[2] – when you’re designing an object you’re often only concerned with the interface – with its relationship to the outside world. You don’t necessarily care how those relationships are made to exist (though of course what they are largely determines what parts you use to fulfill them). As you proceed through the design process your design is decomposed (or assembled from the bottom up) into parts put in relation to one another, until you’re left with something that satisfies your functional requirements and constraints.
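One way to sketch this interface-first view in code – the bridge example and all the names and numbers below are my own, echoing the chasm example from earlier in this series: the requirements fix only the relationships to the outside world, and many different choices of parts can satisfy them.

```python
# The design problem is stated only as relationships to the outside world:
REQUIRED_SPAN_M = 30.0
REQUIRED_LOAD_KG = 10_000.0

def satisfies(bridge):
    """Does this structure have the high-level attributes the goal needs?"""
    return (bridge["span_m"] >= REQUIRED_SPAN_M
            and bridge["load_kg"] >= REQUIRED_LOAD_KG)

# Very different part choices; the interface doesn't care which is used:
steel_truss = {"material": "steel", "span_m": 35.0, "load_kg": 50_000.0}
wood_beam   = {"material": "wood",  "span_m": 32.0, "load_kg": 12_000.0}
rope_walk   = {"material": "rope",  "span_m": 40.0, "load_kg": 300.0}

print([b["material"] for b in (steel_truss, wood_beam, rope_walk)
       if satisfies(b)])  # steel and wood qualify; rope lacks load capacity
```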

At least, that’s how it works in this toy design process. In real life, constraints and functions are constantly shifting as we iterate through the problem and determine what exactly it is we’re trying to solve. Nevertheless, we’ve made some progress on figuring out what actually goes on during design.


1 – Wikipedia – Isomers
2 – Herbert Simon – The Sciences of the Artificial