
Intelligence vs. Self-organization in a Hybrid Society

Explore the emergence of Socio-Cognitive-Technical Systems, where social interactions are mediated by artificial intelligence. Discover the potential of augmented minds and the challenges of self-organization in this hybrid society.





Presentation Transcript


  1. Intelligence vs. Self-organization in a Hybrid Society. Cristiano Castelfranchi, Institute for Cognitive Sciences and Technologies - Roma

  2. Premise and Issues • “Socio-(Cognitive-)Technical Systems” • What we are unavoidably building with computer networks, AI, and Agent technologies are Socio-Cognitive-Technical Systems. • “Socio-Technical System” in fact means that any new technology implies/requires/introduces not only new skills and competences, but • - new expectations, goals, beliefs; • - new "scripts", with their roles, norms; • - new forms of interaction and conventions among the social actors. • So we have to specify the "cognitive" and interactive side of the new system. • We are Social Engineers; are we aware of that?

  3. Premise and Issues “Socio-Cognitive-Technical Systems” BUT…. more than that: >> The technology itself is “intelligent” and “interactive” and “proactive”: intelligent tools, intelligent environments; … not just “tools” but a HYBRID SOCIETY of physical and virtual, human and artificial social actors, with their roles, powers, competences, goals.

  4. A new hybrid society & intelligence, where social interaction of any kind is not just between humans but is mediated by artificial systems. In such a society there are social interactions (in the proper sense), social networks, and collective behaviors between humans and software agents, humans and robots, robots and agents, and so on for all possible combinations – including, less and less frequently, even interactions between humans without any technological mediation. Not only “mediated” by AI agents but including them as partners.

  5. Premise and Issues “Socio-Cognitive-Technical Systems” BUT…. more than that: >> An extended and augmented body but also an augmented mind (memories, reasoning, data, predictions, problem solving, know how, ….) in an augmented reality: not just the smart physical one and the virtual one (a “second life”), but their coupling.

  6. Premise and Issues “Socio-Cognitive-Technical Systems” MOREOVER ….. this new complex Socio-Technical (and mental) System cannot be just planned and designed. It is dynamically emerging and self-organizing: it is a spontaneous Social Order (von Hayek); a dynamic equilibrium not necessarily "good" for the goals of the actors. What we have & need is not just a top-down organization and control.

  7. Premise and Issues • “Socio-Cognitive-Technical Systems” • There is a problem of the • LIMITED HUMAN (SOCIAL) INTELLIGENCE • Not only as “bounded” and biased Rationality (Simon) • but for “complexity” dynamics and hyper-connection • for the hidden and delegated computational intelligences • for the intrinsic blindness and pretending attitude presupposed by “institutional” constructs. • ?? Will the new coupled world and hybrid intelligence/society reduce human stupidity?? • Will it help us to have a better individual understanding and participation, and a better collective intelligence??

  8. Computational & Cognitive Costs (“bounded rationality”) and Negotiation and Transaction Costs would be unbearable! H. Simon explained to us not only why human intelligence is limited, bounded, and biased, but why it has to be so in principle. Now we need a new Simon to explain to us, at the collective level, not just that > collective intelligence is in fact biased, irrational, limited, as a result of individual intellectual limits and of socio-psychological phenomena (like mass psychology) but > how and why it has to be so, also for functional reasons. In primis: costs, uncertainty reduction, predictability, …

  9. Our general PERSPECTIVE - 1 The “Cognitive Mediators” of Social Phenomena. Social and cultural phenomena cannot be deeply accounted for or supported without explaining how they work through the individual agents’ minds (mental “counterparts” or “mediators”). This requires a richer cognitive model (architecture) for “Artificial Intelligences”, moving from formal and computational AI and ALife models closer to those developed in psychology, cognitive science, and in cognitive approaches in economics, sociology, and organization studies. “The most important fact concerning human interactions is that these events are psychologically represented in each of the participants” (Kurt Lewin, 1935)

  10. Our general PERSPECTIVE - 1 “COGNITIVIZING” Cooperation, Conflict, Power, Social ‘Values’, Commitments, Norms, Rights, Social Order, Trust, … ______________________________ Pareto, Garfinkel, … the aim of founding the Social Sciences as Autonomous from Psychology

  11. Our general PERSPECTIVE - 1 The “Cognitive Mediators” of Social Phenomena. Social phenomena are due to the agents’ behaviors, but… the agents’ behaviors are due to the mental mechanisms controlling and (re)producing them. (Castelfranchi, Conte, Miceli, Falcone, …) For example: My Social Power lies in, consists of, the others’ Goals & Beliefs!! That’s why we need Mind Reading! Not for adjusting ourselves, but for manipulating and exploiting the others, or for helping or punishing them.

  12. Our general PERSPECTIVE - 1 • The “Cognitive Mediators” of Social Phenomena • Social phenomena are due to the agents’ behaviors, but… • the agents’ behaviors are due to the mental mechanisms controlling and (re)producing them. (Castelfranchi, Conte, Miceli, Falcone, …) • For example: • How should the norm work through the minds of the agents? How is it “represented”? • Which are the proximate mechanisms underlying normative behavior? • Could we monitor and support them in a hybrid society • without deeply (not merely behaviorally) understanding and building them??
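To make the question concrete, here is a minimal sketch (not the author's model; class names and representation format are illustrative assumptions) of one way a norm can work "through the mind" of an agent: first as a belief that the norm exists, then as an adopted goal to comply.

```python
# Illustrative sketch only: a norm represented "through the mind" of an agent,
# as a belief about the norm plus an (optionally) adopted compliance goal.
from dataclasses import dataclass, field

@dataclass
class Norm:
    content: str    # e.g. "do not cross on red"
    issuer: str     # the authority or group the norm is attributed to

@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)   # propositions the agent holds true
    goals: list = field(default_factory=list)   # goals driving behavior

    def recognize_norm(self, norm: Norm):
        """Cognitive emergence: the agent becomes aware that the norm exists."""
        self.beliefs.add(f"norm({norm.content}) issued by {norm.issuer}")

    def adopt_norm(self, norm: Norm):
        """Normative goal adoption: complying becomes one of the agent's own goals."""
        if any(b.startswith(f"norm({norm.content})") for b in self.beliefs):
            self.goals.append(f"comply: {norm.content}")

red_light = Norm("do not cross on red", issuer="traffic code")
a = Agent("pedestrian")
a.recognize_norm(red_light)   # the norm can only regulate behavior once represented
a.adopt_norm(red_light)
print(a.goals)                # ['comply: do not cross on red']
```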

  13. Our general PERSPECTIVE – 2 However, Mind is not enough! The “individualistic + cognitive” approach is not sufficient for social theory and processes (even when modeling joint and collective attitudes and actions). The social actors do not understand, negotiate, and plan for all their collective behavior and cooperative activity. (Lewin’s ambiguity)

  14. Premise and Issues A "cognitive" foundation of social interaction and of collective and macro-phenomena, that is, the identification of the needed mental mediators, the (individual or collective) "representations" behind the macro dynamics or relations, leads to a disappointing conclusion, rather close to the idea of the unavoidable "alienation" of human powers, the unavoidable Leviathan face of Power, and the impossibility of a direct and fully transparent democracy ("government") of society. That conclusion is: there are several structural and functional reasons at the collective level why “we” have to be (individually and collectively) blind.

  15. Premise and Issues “Socio-Cognitive-Technical Systems” We have to "understand" and to reproduce also how humans socially construct something without understanding it! How is it possible that intentional agents do not intend the functions of their collective behavior? What is the relationship between emergent functions and intended goals? CAN WE SUPPORT HUMAN ORGANIZATIONS & BUILD EFFECTIVE SOCIAL SYSTEMS WITHOUT UNDERSTANDING AND GOVERNING THAT!?

  16. Minds as Coordination Artifacts 1

  17. Social interactions organize, coordinate, and specialize as "artifacts", tools; not only for "coordination" but for achieving something, for some outcome (goal/function), for a collective "work"; >> these artifacts specify (predict and prescribe) the mental contents of the participants, both in terms of beliefs and acceptances and in terms of motives and plans; >> we have to revise the "behavioristic" view of "scripts" and "roles”. When we play a role we wear a "mind". No collective action without shared and/or ascribed mental contents. This is also essential for a crucial form of “automatic” mind reading (mind ascription).
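A hedged toy sketch of "wearing a mind": a script role is modeled as a bundle of prescribed beliefs and goals that is ascribed to whoever fills the role. The roles, contents, and function names are illustrative assumptions, not taken from the source.

```python
# Toy sketch (illustrative assumptions): a script role as a bundle of
# prescribed mental contents that a participant "wears" when playing it.
ROLE_SCRIPTS = {
    "waiter": {
        "beliefs": ["the customer wants to order", "the menu is current"],
        "goals": ["take the order", "bring the bill when asked"],
    },
    "customer": {
        "beliefs": ["the waiter will take my order"],
        "goals": ["order a dish", "pay at the end"],
    },
}

def wear_role(agent_mind: dict, role: str) -> dict:
    """Ascribe/prescribe the role's mental contents to the agent playing it."""
    script = ROLE_SCRIPTS[role]
    agent_mind.setdefault("beliefs", []).extend(script["beliefs"])
    agent_mind.setdefault("goals", []).extend(script["goals"])
    return agent_mind

# Others can now perform "automatic" mind reading: they ascribe this mind to
# whoever plays the role, without inspecting the person's actual mental states.
anna = wear_role({"beliefs": [], "goals": []}, "waiter")
print(anna["goals"])   # ['take the order', 'bring the bill when asked']
```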

  18. > It is really crucial for social agent architecture and for understanding social coordination and cooperation: not only Mind Reading or Ascription (Beliefs + Goals + Emotions, …) but Mind modification: modifying MY mind and modifying HIS mind for coordinating our actions. That is: GOAL ADOPTION & GOAL INDUCTION. “Mind reading” is for that (more than for prediction!): coordination is based not just on reading minds but on ADOPTING minds and ASCRIBING (PRESCRIBING) them.
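A minimal sketch of the two operations named on this slide, goal adoption (I take one of your goals as my own) and goal induction (I get you to form a goal I want you to have). The agent class and method names are assumptions for illustration only.

```python
# Illustrative sketch of GOAL ADOPTION and GOAL INDUCTION between two agents.
class Agent:
    def __init__(self, name):
        self.name = name
        self.goals = set()

    def adopt_goal(self, other: "Agent", goal: str):
        """Goal adoption: I take on one of YOUR goals as a goal of mine
        (e.g. in order to help you)."""
        if goal in other.goals:
            self.goals.add(goal)

    def induce_goal(self, other: "Agent", goal: str):
        """Goal induction: I modify YOUR mind so that you pursue a goal
        I want you to have (by request, persuasion, incentive, ...)."""
        other.goals.add(goal)

alice, bob = Agent("alice"), Agent("bob")
bob.goals.add("move the table")
alice.adopt_goal(bob, "move the table")               # coordination via an adopted mind
alice.induce_goal(bob, "lift on the count of three")  # Alice prescribes Bob's next goal
print(alice.goals, bob.goals)
```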

  19. Mind reading is also for manipulating the others. And in general social scripts and our social action are aimed at shaping the other’s mind. Moreover, part of these assumptions of beliefs in the others’ minds, and in my mind, is just simulated; we behave “as if” the others had those beliefs in their brain, but actually they need not even be there, recorded in some memory file; social coordination works “as if” we had a mind.

  20. Mind itself is a social Artifact >> what really matters is the ascribed/prescribed, worn, mind. We have to "play"/pretend (like in symbolic play) "as if" we had those mental contents. This social convention and mutual assumption makes the interaction work, and allows "the play we're playing" (Garfinkel). >> The ascribed beliefs and goals are not necessarily explicitly there; they might be just implicit as "inactive" (we act just by routine and automatically) or implicit as "potential". >> Coordination and social action work thanks to these ascribed and pretended minds, thanks to those conventional constructs. Our social minds are social "institutions".

  21. Ascribed and endowed minds are the crucial Coordination artifacts in cognitive agents. Apart from Communication, for informing you, and Influence/Manipulation and Adoption, for inducing you to do something.

  22. TWO Coordination Artifacts are crucial, because they create the Common ground, the presupposed shared knowledge: a) scripts, games, rules, norms, institutions, practices, … Garfinkel’s order; b) ascribed and endowed minds: what you have to, are prescribed to, and are assumed to assume, will, and do. Your prescribed/expected mind.

  23. Actually also Communication is for shaping your mind; and also scripts, rules, norms, games are for that… So: the central device is Mind shaping and presupposing. In fact, the behaviors we need to coordinate are REGULATED by shaped minds; so either minds are coordinated, or no coordination will be there. Minds in social species are there (also) for that.

  24. Mind is not enough: Emergence, Self-Organization, Functions and Cognitions 2

  25. Mind is not enough: emergence & immergence. [Diagram: INDIVIDUAL MIND (Bel --> G --> action) <--> COLLECTIVE STRUCTURES & BEHAVIOURS] Not only knowledge, mutual beliefs, reasoning, shared goals, and deliberately constructed social structures and cooperation.
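A toy sketch of the loop the diagram suggests: individual minds (beliefs generating goals generating actions) produce a collective structure, and that emergent structure "immerges" back into the agents' beliefs. All parameters, names, and the specific scenario are illustrative assumptions.

```python
# Toy emergence/immergence loop (illustrative): Bel --> Goal --> action at the
# individual level; the aggregate outcome feeds back into individual beliefs.
import random

class Agent:
    def __init__(self):
        self.belief_crowded = False          # Bel: "the square is crowded"
    def act(self) -> str:                    # Bel --> Goal --> action
        # Goal: avoid the crowd if believed crowded, otherwise go out.
        return "stay_home" if self.belief_crowded else "go_out"

agents = [Agent() for _ in range(100)]
for step in range(6):
    actions = [a.act() for a in agents]
    crowd = actions.count("go_out")          # emergent collective structure
    for a in agents:
        if random.random() < 0.6:            # only some agents notice the crowd level
            # Immergence: the macro outcome re-enters each individual mind as a belief.
            a.belief_crowded = crowd > 60
    print(f"step {step}: {crowd} agents in the square")
# Nobody intends the resulting fluctuation of attendance, yet it is a stable
# emergent pattern produced by, and acting back on, individual minds.
```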

  26. Levels of emergence [figure]: agents in a common world (INTERFERENCE); objective DEPENDENCE network; cognitive emergence: awareness.
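A hedged sketch of the "objective DEPENDENCE network" named on the slide: dependence is computed from agents' goals and action repertoires, whether or not any agent is aware of it; awareness ("cognitive emergence") is a separate step. The data layout and goal names (p, q, r) are illustrative.

```python
# Illustrative sketch: an objective DEPENDENCE network derived from goals and
# abilities; "cognitive emergence" is modeled as an agent becoming aware of it.
agents = {
    "x": {"goals": {"p"}, "can_do": {"q"}},
    "y": {"goals": {"q"}, "can_do": {"p", "r"}},
    "z": {"goals": {"r"}, "can_do": set()},
}

def dependence_network(agents):
    """a depends on b for g iff a has goal g, cannot achieve it itself, and b can."""
    edges = []
    for a, da in agents.items():
        for g in da["goals"] - da["can_do"]:
            for b, db in agents.items():
                if b != a and g in db["can_do"]:
                    edges.append((a, b, g))
    return edges

objective = dependence_network(agents)   # exists whether or not anyone knows it
print(objective)                          # [('x', 'y', 'p'), ('y', 'x', 'q'), ('z', 'y', 'r')]

# Cognitive emergence: x becomes AWARE of its dependence on y, which can then
# ground x's goal of influencing y (i.e. y's social power over x).
awareness_of_x = [e for e in objective if e[0] == "x"]
```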

  27. For a (Pessimistic) Theory of Spontaneous Social Order A critical homage to F. von Hayek

  28. I will examine: • the crucial relationships between the intentional nature of the agents' actions and their explicit goals and preferences, and the possibly unintended 'finality' or 'function' of their behavior; • argue in favor of 'cognitive architectures' in computer simulations; • propose some solutions about the theoretical and functional relationships between agents' intentions and the non-intentional 'purposes' of their actions. • 'Social order' is not necessarily a real 'order' or something good and desirable for the involved agents, nor necessarily the best possible solution. • It can be bad for the social actors, against their intentions and welfare, although emerging from their choices and being stable and self-maintaining. • Hayek's theory of spontaneous social 'order' and Elster's opposition between intentional explanation and functional explanation will be criticized.

  29. “THE core theoretical problem of the whole social science” (Hayek)

  30. “THE core theoretical problem • of the whole social science” (Hayek) • "This problem (the spontaneous emergence of an unintentional social order and institutions) is in no way specific to economic science.... it doubtless is THE core theoretical problem of the whole social science" (von Hayek, Knowledge, Market, Planning) • The problem is not simply how a given equilibrium or coherence is achieved and some stable order emerges. • To have a "social order" or an "institution", spontaneous emergence and equilibrium are not enough. They must be "functional".

  31. Adam Smith’s "invisible hand" • Adam Smith’s original formulation of “THE problem” is much deeper and clearer. • The great question is how: • "(the individual) - that does neither, in general, intend to pursue the public interest, nor is aware of the fact that he is pursuing it, ... is conduced by an invisible hand to pursue an end that is not among his intentions" (Smith). • Hayek, like Smith, in acknowledging the teleological nature of the invisible hand and of spontaneous order, cannot avoid attributing to it • a (positive) value judgment, a providential, benevolent, optimistic vision of this process of self-organization (ideologism).

  32. In the “Invisible Hand”: • 1) there are intentions and intentional behavior; • 2) some unintended and unaware (long term or complex) effect emerges from this behavior; • 3) but it is not just an effect, it is an end we “pursue”, i.e. it orients and controls -in some way- our behavior: we "necessarily operate for" that result (Smith). • - How is it possible that we pursue something that is not an intention of ours; that the behavior of an intentional and planning agent be goal-oriented, finalistic (‘end’), without being intentional? • - In which sense is the unintentional effect of our behavior an "end”??

  33. Theory of “Function” This problem appeared in other social sciences as the problem of the notion of "functions" (social and biological) impinging on the behavior of anticipatory and intentional agents, and of their relations with their "intentions".

  34. Social Functions and Cognition • a) no theory of social functions is possible and tenable without clearly solving this problem; • b) without a theory of emerging functions among cognitive agents, social behavior cannot be fully explained. • Moreover: we have to build social functions and spontaneous orders (conventions, conformity, …) in Agent-supported human organizations and in open MAS, • not only good, intentionally cooperating/competing systems.

  35. Social Functions and Cognition • However, in the new Hybrid, coupled, augmented reality we could: • - reduce the human informational, cognitive, affective “handicap” by providing much better, reliable, up-to-date data, instructions, predictions, ..; • - and also make explicit and even visible to the individuals the distant effects and the supposed “functions” of their conduct; • - and explain the “aim” of norms, rules, etc.; • - and give people voice, participation, and proposal power in discussing these rules and outcomes, and in deciding about them.

  36. Social Functions and Cognition • Functions install and maintain themselves parasitical to cognition: • functions install and maintain themselves • thanks to and through agents' mental representations, • but not as mental representations: • i.e. without being known or at least intended.

  37. Social Functions and Cognition • While Social Norms' emergence and functioning require also a (partial) "cognitive emergence", • Social Functions require an extra-cognitive emergence and working. • For a Social Norm to work as a Social Norm and be fully effective, agents should recognize and treat it as a Social Norm. • On the contrary, the effectiveness of a Social Function is independent of the agents' understanding of this function of their own behavior: • a) the function can arise and maintain itself without the awareness of the agents; • b) if the agents intended the results of their behavior, these would no longer be mere "social functions" of their behavior, but just "intentions".

  38. The problem: • Emergence and Functions should not be • what the observer likes or notices • (“just in the eye of the beholder”), • but should be indeed observer-independent, • based on self-organizing and self-reproducing phenomena; >>> "positive”, “good” can just consist in this. • We cannot exclude "negative functions" (Merton) (kako-functions) from the theory: perhaps the same mechanisms are responsible for both positive and negative functions. • >> Two kinds of finalistic notions: • - evolutionary finalities, adaptive goals; and • - mental ends (motives, purposes, intentions).

  39. Intentional behavior vs. functional behavior • Finalistic systems: • There are two basic types of system having a finalistic (teleonomic) behaviour: • Goal-oriented systems (McFarland, 1983), • Goal-governed systems: • a specific type of Goal-oriented system based on representations that anticipate the results.
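An illustrative sketch of the distinction: a goal-oriented system whose behavior merely converges on a result with no internal representation of it, versus a goal-governed system that explicitly represents the desired state and anticipates the outcomes of its actions. The thermostat scenario and numbers are assumptions for illustration.

```python
# Illustrative sketch (not from the source): goal-ORIENTED vs goal-GOVERNED systems.

def goal_oriented_step(temperature: float) -> float:
    """Thermostat-like rule: behavior converges on ~20 degrees, but the system
    holds no representation of '20 degrees' as an anticipated result; the
    'finality' is only in the wiring."""
    return temperature + (0.5 if temperature < 20 else -0.5)

class GoalGoverned:
    """Goal-governed system: it explicitly represents the desired state,
    anticipates the result of each available action, and picks the action
    whose predicted outcome best matches the represented goal."""
    def __init__(self, desired: float):
        self.desired = desired                        # explicit goal representation
    def step(self, temperature: float) -> float:
        candidates = {"heat": temperature + 0.5, "cool": temperature - 0.5}
        best = min(candidates, key=lambda a: abs(candidates[a] - self.desired))
        return candidates[best]

t = u = 17.0
agent = GoalGoverned(desired=20.0)
for _ in range(8):
    t, u = goal_oriented_step(t), agent.step(u)
print(t, u)   # both end near 20, but only the second system represents the goal
```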

  40. Goals vs. “Functions”

  41. MAIN PROBLEMS • If a behavior is reproduced thanks to its good effects, effects that are good relative to the goals of the agent (individual or collective) who reproduces them by acting intentionally, there is no room for "functions” (Elster). • If the agent appreciates the goodness of these effects and the action is repeated in order to reproduce these effects, they are simply "intended". • How is it possible that a system which acts intentionally, and on the basis of the evaluation of effects relative to its internal goals, reproduces bad habits thanks to their bad effects?

  42. >> ?? A behavioristic reinforcement layer (van Parijs) • together with • >> a deliberative layer (controlled by beliefs and goals) ??? • The deliberative layer accounting for intentional actions and effects, • the behavioristic layer (exploiting conditioned or unconditioned reflexes) accounting for merely "functional" behaviors?? • Are “functions” and “roles” just impinging on ‘habitus’ ??? (Bourdieu), while • intentions would just be for personal purposes?? • Our problem is indeed that: • intentional actions have functions! • Goals and beliefs of the agents have functions.
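A rough sketch of the hypothesized two-layer architecture questioned on this slide: a deliberative layer choosing actions from beliefs and goals, and a behavioristic reinforcement layer that strengthens habits on the basis of effects, independently of deliberation. The class names, belief format, and update rule are assumptions for illustration only.

```python
# Illustrative two-layer agent: a deliberative layer (beliefs & goals) over a
# behavioristic reinforcement layer (habit strengths updated by effects).
import random

class TwoLayerAgent:
    def __init__(self, actions):
        self.goals = set()
        self.beliefs = {}                                  # e.g. {"eat achieved by": "cook"}
        self.habit_strength = {a: 1.0 for a in actions}    # reinforcement layer

    def deliberate(self):
        """Deliberative layer: pick an action believed to achieve a current goal."""
        for goal in self.goals:
            action = self.beliefs.get(f"{goal} achieved by")
            if action:
                return action
        return None

    def choose(self):
        intended = self.deliberate()
        if intended:
            return intended
        # Otherwise fall back on habits, weighted by their reinforcement history.
        acts = list(self.habit_strength)
        weights = [self.habit_strength[a] for a in acts]
        return random.choices(acts, weights)[0]

    def reinforce(self, action, effect_value):
        """Behavioristic layer: effects strengthen or weaken the habit,
        whether or not they were intended or even noticed."""
        self.habit_strength[action] = max(0.1, self.habit_strength[action] + effect_value)

agent = TwoLayerAgent(["greet", "ignore"])
agent.reinforce("greet", 0.3)     # an unnoticed side effect still shapes the habit
print(agent.choose())
```

The slide's own point, however, is that this split is not enough: intentional actions themselves, and the beliefs and goals behind them, have functions, so the functional loop cannot be confined to the lower layer (see the sketch after slide 49).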

  43. The fundamental problem is • how to graft teleological but unintentional behaviours precisely onto intention-driven behaviours. • [WE HAVE TO BUILD THIS KIND OF HYBRID SYSTEMS] • What answer can be given to Elster, according to whom the idea of intention makes that of the function of behaviour impracticable and superfluous? • How can intentional acts also be functional, that is, • unwitting but • reproduced precisely as a result of their unintentional effects?

  44. Why also Kako-functions? How is it possible?

  45. Why also kako-functions? • - The mechanism that installs a bad function can be exactly the same as the one installing a good one. • - To definitely separate a functional view of behavior and society from any teleological, providential view (functions can be very bad and persist although bad). • - Kako-functions cannot be explained in a strictly behavioristic framework of reinforcement learning: the result of the behavior can be disagreeable or useless, but the behavior will nevertheless be "reinforced", consolidated and reproduced.

  46. Unexpected evil effects exist, or evil effects combined with good individual intentions (Boudon, 1977), in which • the intended good effects are reproduced • in spite of the negative consequences. • This is true • - both in the case in which the evil effects are not perceived or are not attributed correctly, • - and in the case in which they are perceived • (in the second case the good effects must be subjectively more important and in any case preferred (for instance, be closer in time), or else be more conditioning/reinforcing than the evil effects).

  47. But there are also harmful effects capable of self-reproduction (through the action) precisely because of their negative nature (Castelfranchi, 1997; 1998b; 1998d). For example: a long line of automobiles and the slowing down due to the simple individual intention of rapidly glancing at an accident that has occurred in the other lane.
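The traffic example can be put in a toy simulation (all numbers invented) showing how a harmful effect reproduces itself: each driver's intention is only a quick glance, but the resulting slowdown gives every following driver more time and occasion to glance, which keeps the jam alive.

```python
# Toy simulation (invented parameters) of the rubbernecking kako-function:
# the slowdown is nobody's intention, yet it reproduces the very glancing
# behavior that causes it.
speed = 90.0                    # km/h of the lane passing the accident
for minute in range(10):
    # The slower the traffic, the longer each driver can (and does) glance.
    glance_seconds = 1.0 + (90.0 - speed) * 0.05
    # Glancing slows the flow; free flow otherwise recovers a little.
    speed = max(20.0, speed - glance_seconds * 4.0 + 3.0)
    print(f"minute {minute}: speed {speed:5.1f} km/h, glance {glance_seconds:.1f} s")
# The speed settles into a persistent jam: the negative effect sustains the
# behavior that produces it, without anyone intending the jam.
```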

  48. The notion of ‘function' as an effect selecting and reproducing its own cause. How is it possible for a system that acts intentionally, on the basis of an evaluation of the effects vis-à-vis its own goals, to reproduce bad habits precisely as a result of their bad effects? And, even more crucially, if a behaviour is instead reproduced thanks to its good effects with respect to the (individual or collective) goals of the agent who reproduces them by acting intentionally, then there is no room for the "functions".

  49. It is necessary to have complex reinforcement learning forms, not merely based on classifiers, rules, associations, motor sequences, etc., but operating on the cognitive representations governing the action, that is, on beliefs and goals. • In this view, "the consequences of the action, which may be more or less consciously anticipated, nevertheless modify the probability of the action being repeated the next time in similar stimulus conditions" (Macy, 1998). More exactly: • the functions are simply effects of behaviour which go beyond the intended effects but which can successfully be reproduced because they reinforce the agent's beliefs and goals that give rise to this behaviour.
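A hedged toy sketch of this mechanism, consistent with the quoted idea but not taken from the source: an unintended effect of an action feeds back on the belief and goal that generated it, so the behavior is reproduced "because of" an effect the agent never represents as a goal. The update rule and all numbers are assumptions.

```python
# Toy sketch: reinforcement operating on cognitive representations.
# An unintended side effect strengthens the very belief/goal pair that produced
# the action, so the behavior reproduces itself without the effect ever
# becoming an intention of the agent.
import random

belief_strength = 0.6     # belief: "greeting the neighbor is appropriate"
goal_value = 0.5          # value of the intended effect (being polite)

for episode in range(20):
    # The agent acts intentionally when belief and goal are strong enough.
    if belief_strength * goal_value > random.random() * 0.3:
        unintended_effect = 0.1   # e.g. local social cohesion, never represented as a goal
        # Feedback on the generating representations, NOT on any new intention:
        belief_strength = min(1.0, belief_strength + unintended_effect * 0.5)
        goal_value = min(1.0, goal_value + unintended_effect * 0.5)

print(round(belief_strength, 2), round(goal_value, 2))
# The greeting habit is consolidated by an effect the agent never intended:
# a "function" implemented through, but not as, mental representations.
```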

  50. How Social Functions are implemented through cognitive representations The basic model
