Will AI save the world or will we save the world of AI?

A summary of the 14th OAP workshop on AI and the artificialities of intelligence
On June 6 and 7, the 14th Organizations, Artifacts & Practices (OAP) workshop, co-organized by Université Paris Dauphine-PSL and ESSEC, took place. The event brought together over 94 researchers from 18 countries around the theme "AI and the artificialities of intelligence: what matters in and for organizing?". The program included five keynote lectures, two panels and 55 presentations. Drawing on work in organization studies, information systems, sociology, philosophy and the history of technology, the aim was to put artificial intelligence into perspective within the long history of the ways in which 'our' intelligence has been delegated to objects, infrastructures, machines and techniques.

At the end of two days of intense exchanges, alternating plenary lectures, panels and parallel sessions, three interesting trends emerged from our discussions. I would like to summarize them briefly here.

The first concerns the "all-encompassing" nature of AI as a new phenomenon. AI is presented as radically reconfiguring every dimension of our lifestyles, consumption patterns and participation in democratic life. While many describe it as a "recurrent" innovation (AI dates back to the 1940s), most of the experts present insisted on the "bad faith" of the AI phenomenon: an unclear content (integrating formal neural networks, but also algorithms, massive human labor, plundered data and, more recently, generative tools whose contours and specificities are not so clear-cut either), and a strange obsession with the revolutionary capabilities of an innovation whose boundaries are far from obvious. Right or wrong as description or prophecy, it is ultimately our relationship with intelligence that is being called into question today. Have we entered a world that has become so hopeless? Failing to envisage an evolution of individual human capacities or, better still, of our "collective intelligences" and modes of experimentation, all hopes are now (re)placed in this new magic tool, this space that recombines all the world's data to produce new data. The phenomenon is not new, but here again the radical nature of the hopes, the total horizon of the hoped-for changes, is astonishing.

The second striking trend in the discussions (extending the previous one) concerned questions of "ontology". What is the reality or mode of existence of what might be considered AI? It is tempting to contrast, and put into conversation, two very inspiring keynote lectures given at OAP 2024. In the opening lecture, Antonio Casilli warned us of the illusory and "fake" nature of artificial intelligence. Ultimately, it is our efforts, our data and, more unfairly, millions of click workers that make AI work. These invisible workers help supervise learning, take over from the AI from time to time, or sustain the permanent illusion of a tool that intelligently meets our expectations. The ontology of AI is then that of an "appearance", or even an "illusion". Addressed to all those who stand before its interface, on the surface of the digital world, AI is fully a (critical) phenomenology. To understand AI, we need to grasp the processes of this appearance, deconstruct its arrangements, and denounce the massive and distant exploitation at work in its operation.

For his part, Dominique Lestel proposed a more ontological thesis in the closing keynote lecture of our workshop. He notably combined the thoughts of Bergson and Ellul on technology to offer a more biological and processual account. AI is not "addressed to" us, waiting to be activated for the benefit of more or less fallacious mechanisms. For him, it is embedded in living organisms. It is probably less alive than the multiplicity of organisms that make up a human body, but "alive all the same". And these intelligent organisms have no skin. They are multiple and open to the world, with no frontier other than that assumed by a moment's mode of action. Beyond an AI exploiting and dominating vulnerable actors, Lestel insisted on the potentially insurrectionary character of "intelligent" tools in the world.
These two ontologies, phenomenological and biological (or processual), both concerned with what is "politically" at stake with AI, are worth bearing in mind. The question of their complementarities, mutual exclusions and alternatives is beyond the scope of this short post.

Finally, the discussions at the 14th OAP also provided an opportunity to revisit the intersecting history of AI and management (particularly during the sessions and opening remarks). The history of management is in fact inseparable from the history of artificial intelligence, and from a broader quest to delegate intelligences within the framework of a major separation between design and execution. This is particularly striking when we look at the birth of management sciences and the formation of a global academic system of managerial knowledge. One of the founding events was undoubtedly the creation of the Academy of Management (AoM), the most central network for management researchers worldwide. It is home to the most prestigious academic journals, organizes the field's most structuring annual conference (its "annual meeting"), and hosts key discussions on the scientific and epistemological strategies of management communities. It was founded on December 30, 1941, on the premises of New York University. This was just days after Pearl Harbor, amid the shock of the United States' entry into the war. The statutes adopted (see my book The Rise of Digital Management, New York: Routledge) mention a surprising objective in the association's second clause: "The purpose of the Academy is to establish and promote a philosophy of management". But what kind of "management philosophy" are we talking about?

As several participants at OAP 2024 pointed out, this philosophy is unquestionably "representationalist". It was, and still is, a matter of providing managers (in courses, through consulting services, by means of in-house experts...) with techniques and, more generally, (soon to be interconnected) "systems of representation". Using figures, texts and visualizations, one must get as close as possible to the reality of markets, consumers, employees and innovations. This world is all about matching, putting into correspondence, the 'representing' with the 'represented'. The challenge is to build up an intelligence of the situation, and then to make the right decision by taking the most realistic and rational path possible. In this direction, the world must be transformed into "data". The collection of traces of the past must be ever more massified so they can be projected onto the present and future (a powerful temporal engineering). With ever more intuitive interfaces, ever more autonomous learning and "prompts" sounding like so many assaults on the reality of the datum, the user submits ever more to the regime of truth of a tool that disappears behind the simplicity of gestures. Here, so-called creativity is merely a matter of (re)combination, aggregation and translation. Living, resonant, sensitive intelligence is more than ever replaced by the endless flow of subjectless desires. "I" is this endless series of requests, posts, videos, tweets... My being exists in this frenzy of digital threads, in which it never manages to expand and reflect itself.

At the end of this particularly rich scientific discussion, we all came away a little stunned. Whatever our disciplinary field, AI crystallizes hopes, tensions and projects in a way that other research themes rarely do. The climate crisis, the transformation of work, the renewal of public decision-making, geopolitics and industrial strategies are more than ever questioned by this phenomenon, which is far from being the latest fad. Against this backdrop, everyone was also able to appreciate the critical, historical and pragmatic work currently being carried out by researchers in the human and social sciences. And as Ella Hafermalz, a researcher from Vrije Universiteit Amsterdam, pointed out, the AI object itself should address us in a more impertinent mode. It should equip citizens to conduct their own investigations into the world (see also Paul Pangaro on this issue). When will an AI be able to tell us "your question is stupid"? When will an AI be able to ask us "What are the presuppositions of your question?"? When will we see an AI that can reformulate our questions, continually explain how it works, and provide real partnership support on the road to research that is within everyone's reach? When will we see an AI capable of resisting us in order to help us give birth to ourselves? When will AI be thought through and experimented with systematically in the context of larger communities of inquiry and open practices? This type of AI might require more than just corporate funding: a powerful public policy. We could then dream of an AI whose objective would be to feed knowledge commons and open communities of inquiry rather than information consumption. A living intelligence at last.
