Longevity: Designs that last
David D. Clark, MIT CSAIL, October 2009
Background • The Internet has lasted a very long time, perhaps 35 years. • Not as long as the phone system. • Longer than most other IT artifacts. • C: 1972 • Unix: about 1969 • We can argue about whether it is showing its age. • How much have Unix and C morphed and produced descendants?
Going forward • If we are going to contemplate a future Internet, we should have longevity as an explicit goal. • So how do we reason about longevity and how to achieve it? • My thesis: there are many theories of longevity, and we need a “multi-theory analysis of longevity” to succeed.
Three general classes • Theories of change • The world changes and the system must evolve to track those changes. • Theories of stability • A stable platform is the foundation of a plastic superstructure. • Theories of innovation
A word about architecture • With respect to change, two points of view. • A stable architecture that supports change • (In other words, the architecture is the part that does not change—a theory of stability.) • An architecture that can evolve to track change. • (A theory of change.)
Two architectural theories: • The theory of ossification (Belady and Lehman 1976). • Their First Law of Program Evolution Dynamics • An episodic theory of change: systems lose the ability to change over time, and have to be redone from scratch. • The theory of utility • Before we discuss longevity, we need to ask why the architecture is useful. • So any theory of longevity has a (perhaps unstated) theory of utility inside it.
The theory of the general net • If a network were fully general, then it could be used for anything (the theory of utility), so it would never need to evolve. • A theory of stability. • Making it practical: define a network in terms of an ideal general service and the impairments to that service. • A maximally general network can trade off impairments along an irreducible frontier.
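As a rough illustration of this impairment/frontier framing (not from the original slides), the sketch below treats a service as an impairment vector and asks whether a candidate design sits on the irreducible frontier, i.e., is not dominated on every dimension at once. The dimensions, class names, and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Impairments:
    """Deviations from the ideal general service (hypothetical dimensions)."""
    delay_ms: float      # added latency
    loss_rate: float     # fraction of packets lost
    reorder_rate: float  # fraction delivered out of order

def on_frontier(candidate: Impairments, feasible: list[Impairments]) -> bool:
    """A design is on the irreducible frontier if no feasible design
    improves every impairment at once (i.e., it is not dominated)."""
    return not any(
        f.delay_ms < candidate.delay_ms
        and f.loss_rate < candidate.loss_rate
        and f.reorder_rate < candidate.reorder_rate
        for f in feasible
    )

# Illustrative feasible designs (made-up numbers).
designs = [
    Impairments(delay_ms=5, loss_rate=0.02, reorder_rate=0.01),
    Impairments(delay_ms=50, loss_rate=0.001, reorder_rate=0.0),
]
print(on_frontier(Impairments(60, 0.02, 0.01), designs))  # dominated -> False
```

The real-options question on the next slide is then: how far inside this frontier (for today's applications) are we willing to sit in order to keep room to move later?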
The theory of real options • As an architecture design principle: how far off the frontier (for today’s popular applications) are we willing to be, so as to be able to move in the space of tradeoffs in the future?
Theory of tussle • The theory of the general net is incomplete—it only expresses the objectives from the perspective of the communicants. • Other actors must be taken into account in the architecture if the network is to survive tussle. • Design to minimize the consequences of tussle • Try to prevent tussle from disrupting the architecture.
Sub-theories • Tussle isolation. • Placement of interfaces • The theory of the firm. • Removal of interfaces. • A theory of stability, hegemony and market power. • Asymmetric tools of tussle. • Who do you arm? • The theory of the blunt instrument.
Theories of decomposition • Data-centric • What I called data enclosures. • Isolation of high-value information. • Regions with distinct needs for assumptions about trust. • Efficient positioning of interactive components. • Economic decomposition. • Isolation of components that require quick upgrade/enhancement. • Security defenses. • Isolation of components that capture cultural norms. (very abstract statement, I know…)
Change or stability? • Tussle may seem like a theory of change. • In fact, it is more a theory of stability. • Dynamic stability. • A system that can survive tussle is one that can survive the process of being brought back to a stable state if perturbed. • Tussle as a homing sequence… • It is a system that can have different outcomes, so can be different in different times/places. • Tolerance of diversity is a path to longevity.
The theory of building blocks • In this theory, the goal is not a fixed definition of a general service, but a system in which it is easy to add new services. • E.g. a system that supports general service composition. • The theory of maximal functional composition. • The obvious idea: how can the net support composition of services? Addressing, routing, etc. A theory of change. • The extreme: the theory of programmable elements. • The theory of minimal functional composition. • Service composition will be a point of tussle. Design to minimize the impact. A theory of stability.
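A minimal sketch of the building-block idea, assuming a service can be modeled as a transform applied to a message in flight; the compose helper and the compress/frame stages below are invented for illustration, not anything defined in the talk.

```python
import zlib
from typing import Callable

# A "service" is modeled as a transform applied to a message in flight.
Service = Callable[[bytes], bytes]

def compose(*services: Service) -> Service:
    """Chain building-block services into one composite service."""
    def composite(msg: bytes) -> bytes:
        for service in services:
            msg = service(msg)
        return msg
    return composite

# Hypothetical building blocks (not part of any real protocol stack).
def compress(msg: bytes) -> bytes:
    return zlib.compress(msg)

def frame(msg: bytes) -> bytes:
    # Prefix the message with a 4-byte big-endian length field.
    return len(msg).to_bytes(4, "big") + msg

pipeline = compose(compress, frame)
print(pipeline(b"hello, general network"))
```

The maximal/minimal split above is then a question of how much of this composition machinery the network itself should support, given that composition points become points of tussle.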
The theory of the stable platform • A stable platform invites implementers to use that platform, and depend on it. • E.g. the IP layer, the TCP layer, etc. • Or various APIs. • A theory of stability, popular in business schools. • A dynamic theory: a critical mass of complementors will push for the stable platform (a form of tussle) and lock it in. • A special case of the theory of layering.
Similar theories • The theory of the stable platform is similar to the theory of the general network. • (A theory of utility.) • The theory of semantics-free delivery. • The end-to-end principle (in one form). • In contradiction to theories of service composition.
Theories of global agreement • One definition of architecture is that it captures that part of the system about which “we” must agree. • The theory of strong agreement. • A theory of stability. The more we can agree to, the more stable and predictable the platform, the more attractive to complementors, who lock the system in. • The theory of weak agreement. • A theory of change. The less we have to agree to, the more plastic, adaptable, tussle-proof, etc. the system.
The snare of false agreement • When is an agreement an agreement? • When we write it in a standard? • When we embed it in lots of code? • Can agreement be dictated, or is it an emergent property? • Is incentive alignment the best way to launch a candidate agreement? • “Carrots trying to grow up to be sticks.” • A theory of utility.
Incentive alignment • How can we create carrots that then grow up to be sticks? • Hard to get everyone to agree, even to try something. • Need an architecture that allows groups with aligned interests to try new ideas. • Regional outcomes? This relates to design for tussle. If the outcome can be different in different places, then we can work with “places” that have aligned interests to plant carrots.
Technology independence • A long-lived system must survive the inevitable progress of technology. • A sub-class of the theory of the stable platform. • Example: the theory of the hour-glass. • A contrarian theory: the theory of cross-layer optimization. • This theory states that a long-lived architecture must exploit new technology features, not bury them under a stable interface. Otherwise, new technology will eventually make the architecture obsolete.
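A minimal sketch of the hour-glass idea, assuming the narrow waist can be modeled as one small delivery interface that hides the technology below it; the class and method names here are invented for illustration.

```python
from abc import ABC, abstractmethod

class NarrowWaist(ABC):
    """The stable 'hour-glass waist': a minimal delivery interface
    that hides the technology underneath (names are illustrative)."""
    @abstractmethod
    def send(self, dst: str, payload: bytes) -> None: ...

class EthernetLink(NarrowWaist):
    def send(self, dst: str, payload: bytes) -> None:
        print(f"ethernet frame to {dst}: {len(payload)} bytes")

class SatelliteLink(NarrowWaist):
    def send(self, dst: str, payload: bytes) -> None:
        print(f"satellite burst to {dst}: {len(payload)} bytes")

def application(net: NarrowWaist) -> None:
    # Applications code only against the waist, so new link
    # technologies can be swapped in without changing them.
    net.send("198.51.100.7", b"request")

application(EthernetLink())
application(SatelliteLink())
```

The cross-layer-optimization objection is that this very hiding prevents applications from exploiting what a new technology does best.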
Theory of downloadable code • Instead of stable standards, make it possible to upgrade all the devices as necessary. The download capability becomes the stable platform. • If applied to routers, equals active networks. • If applied to the end-nodes, very powerful. • What we do today at higher levels. • Certainly a theory of change. • Hints at virtualization.
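One way to read "the download capability becomes the stable platform" is sketched below, using Python's standard importlib as a stand-in upgrade hook; the node/forwarding module names in the commented usage are hypothetical.

```python
import importlib
import types

def load_update(module_name: str) -> types.ModuleType:
    """The only stable agreement is the upgrade hook itself: fetch and
    (re)load a module that supplies the node's current behavior."""
    module = importlib.import_module(module_name)
    return importlib.reload(module)

# Stand-in demonstration with a standard-library module:
codec = load_update("json")
print(codec.dumps({"upgraded": True}))

# Hypothetical usage on a router or end-node (module name is invented):
# forwarding = load_update("node.forwarding_v2")
# forwarding.handle(packet)
```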
Is change hard? • And why? • Global coordination. • When do version numbers help? • Deployed code. • Automatic update and download. • So why is this hard? • The theory of weak (minimal) agreement suggests that if we can just agree that all systems should support dynamic download, there is little we cannot change, and this is a good outcome.
Simplicity and complexity • There are strongly held beliefs that either simplicity or complexity is the key to a useful/long-lived network. • Simple nets (if useful) are often general nets. • Complex nets have features that let them do lots of stuff. • There is certainly more to be said here, but I am not organized to say it.
Theory of hegemony • In this theory (a theory of stability), dominating the market aligns actors to your approach, which is then further embedded by market pressure. • A theory of the tipping market, perhaps. • A dominant player represses tussle, adds predictability (e.g. to the platform), and thus invites innovation on the platform. • A theory of innovation based on stability.
The current Internet • Supports: the theory of the general network, the theory of the stable platform, the theory of semantics-free service, the theory of technology independence, the resulting theory of the hourglass, perhaps the theory of minimal global agreement, and (to some extent increasing over time) the theory of downloadable code (in the end-nodes). • Rejects: the theory of hegemony, and the theories of composable elements and downloadable code in the network.
The current Internet • Global agreement • Often (e.g. addressing) it was false agreement. • TCP: never mandated, but mandatory. • TCP-friendly congestion: an attempt to force agreement. • DNS: never mandated, but mandatory.
Future networks • Security • The ultimate tussle. • Isolate security elements so they can be replaced. • All elements will be targets. • Composable functions will add to the suite of targets. • Management • Will lead to new peer-to-peer interfaces. • Not sure what theory of longevity should apply.
Future networks II • Virtualization • A theory of innovation based on the theory of a stable platform. The control interfaces are what has to be stable. • A new theory of the general network. • Virtualization presents the higher layers with “raw” virtual resources, so any impairments are intrinsic. • Not a theory of technology independence • Higher layers must each deal with anything new. • Requires a tussle analysis.
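A minimal sketch of the virtualization reading above, assuming the stable part is a small control interface that hands higher layers "raw" slices whose impairments are exposed rather than hidden; all names and numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class VirtualSlice:
    """A raw slice of substrate resources handed to a higher layer.
    Impairments are intrinsic to the slice; the tenant, not the
    platform, decides how to cope with them."""
    cpu_cores: int
    bandwidth_mbps: float
    link_loss_rate: float  # exposed to the tenant, not masked

class SubstrateController:
    """The stable part is the control interface, not the service."""
    def __init__(self, cores: int, mbps: float):
        self.free_cores, self.free_mbps = cores, mbps

    def allocate(self, cores: int, mbps: float, loss: float) -> VirtualSlice:
        if cores > self.free_cores or mbps > self.free_mbps:
            raise RuntimeError("insufficient substrate resources")
        self.free_cores -= cores
        self.free_mbps -= mbps
        return VirtualSlice(cores, mbps, loss)

controller = SubstrateController(cores=16, mbps=1000.0)
slice_a = controller.allocate(cores=4, mbps=100.0, loss=0.001)
print(slice_a)
```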
Future networks III • DTNs (delay-tolerant networks) and information dissemination. • (I lump them because they are “staged” paradigms.) • A more general platform than simple source-destination delivery. • But that is in exchange for more complexity.
A final comment • (Which John Wroclawski feels strongly about.) • A simplistic CS point of view is that the best system is one in which you can change anything, because then there are no constraints. • This is much too simplistic. • The theory of the stable platform, etc.