In recent years, while maintaining his practice as an international legal practitioner, Pierre Kirch has continued to contribute as a conference speaker, teacher and author. Towards the end of his BigLaw career, he trained extensively in mediation, with various organizations and in various countries. The desire to learn (and to learn how to learn better) is innate in Pierre: new technologies (in particular, the exponential progress of AI-driven technologies), new languages, new mediation cultures throughout the world. Pierre remains a European-style mediator, a “facilitator”. One fundamental teaching has become ingrained in him: to assist others, as an independent, impartial and neutral outsider, in renewing communication and seeking out common solutions to conflict, the mediator must first work on himself, constantly, to the highest level of self-development.
For Pierre, to be equal to that purpose, it is important to reflect, to write, to speak, and to communicate and teach in order to transmit. In that spirit, Pierre has chosen to memorialize some of his more recent professional and academic contributions in this Global Journal.
Pierre Kirch is a contributing co-author of Algorithmic Antitrust (Springer, 2022), with a chapter entitled “The Technology Innovation Time Gap in Competition Law Enforcement: Analyzing the European Commission’s Approach”. Indeed, throughout his career, Pierre has endeavored to analyze cutting-edge subjects at the intersection of technology and the law, including most recently the legal and regulatory questions arising from the introduction of Artificial Intelligence systems into business use, in the context of a wide-ranging “package” of EU measures on digital business practices and communications.
In recent times, having entered the realm of international mediation, Pierre has come to believe that we have reached a watershed moment in the practice of the law: the beginning of a necessary convergence between the law and science (that is to say, technological innovation).
In Pierre’s view, which he intends to develop as an author in 2025, the law can only be effective through a multidisciplinary approach. To begin with, this requires a credible working knowledge of the technology itself (and its underlying science), together with other disciplines as appropriate to the subject matter. The key question: how to conceive a method, a “road map”, to accomplish this objective? Could this entail a reformulation of the concept of law itself? As an academic goal for 2025, Pierre intends to study whether exponential technological progress is changing the very foundations of the law as we have known it.
Pierre’s original inspiration for this approach is to be found in the written version of the Rede Lecture delivered by C.P. Snow in 1959, “The Two Cultures and the Scientific Revolution.” This was during the 1950s, when AI concepts were embryonic. Snow observed: “I believe the intellectual life of the whole of western society is increasingly being split into two polar groups... Literary intellectuals at one pole—at the other, scientists. Between the two, a gulf of mutual incomprehension.” Today, Snow could make the same observation concerning science (technology) and the law. His observation goes back some 65 years, and yet the need to bridge the two cultures (in the broad sense: science and technology on the one hand; the arts, the law, theology and philosophy on the other) has become more crucial than ever.
There are other sources of inspiration, in particular pioneering thinkers and explainers of AI in relation to the law. For instance, Aurélie Jean, an MIT-trained computational scientist from France, now an AI entrepreneur in Los Angeles, affirms in her recent book on algorithms and the law (Les algorithmes font-ils la loi ?, 2021, Éditions de l’Observatoire): “I believe profoundly in interdisciplinarity as an approach to the big subjects of our times. The law constitutes part of them, the algorithm as well.” As she explains in detail, for the legislator to regulate issues such as the explainability or interoperability of algorithms, it is necessary to enter into the technical functioning of the algorithm itself.
Let us also turn to Margaret A. Boden, researcher in the cognitive sciences in England and author of Mind as Machine (2006): “An interdisciplinary approach is key. You have to read a hell of a lot of stuff in different disciplines... [drawing on] classical times and involving philosophy, psychology, linguistics, anthropology, neuroscience, theoretical biology, computer science and AI. And that involves straddling the arts-science divide. You have to have a sense for language and the arts and various human aspects of psychology, as well as being able to understand scientific language in neuroscience or computers.”
Pierre’s fundamental purpose in the coming years is to analyze the chasm between the law and technology (and other disciplines) through a multidisciplinary approach. But how? He takes inspiration from the fact that an academic such as Yuval Noah Harari was able, over many years, to rethink and reformulate in four volumes (including Nexus, published in Q4 2024) the entire approach to our common history as Homo sapiens over the last 300,000 years, an approach which also provides insight into the future of humanity.
As a starting point, Pierre, as a highly seasoned legal professional, proposes to “rethink” and “reformulate” the “law” in terms of today’s reality, taking into account not only the current state of the technology (in particular AI technology, as built on its scientific foundations) but also all of the disciplines of the arts and sciences.
Pierre has broadened his approach, writing regularly on the different facets of international mediation, in particular through two articles in the Corporate Mediation Journal: “Rereading Fisher & Ury: Identifying the Advantages of Mediation in the Specific Setting of a Competition Law Dispute” (ed. 4/2019) and “Recourse to Mediation in Times of Crisis: Is Business Ready for a New Approach that Saves Time and Preserves Relationships, also in the Field of Competition Law?” (ed. 1/2020). More recently, he contributed an in-depth analysis of the relationship between mediation and arbitral procedures in matters falling within the procedures of the International Chamber of Commerce, published by Juris Publishing in 2022 in a volume of essays: “Reflections on International Arbitration: Essays in Honor of Professor George Bermann.”
On October 31, 2024, Pierre Kirch moderated the main theme at the annual congress of the Union Internationale des Avocats (UIA), “Can—and should—Artificial Intelligence be regulated?” More than 1,200 lawyers from some 80 countries attended the Congress. As moderator, Pierre brought together panelists from around the world to discuss not only the legal effects of AI tools but also their socio-economic effects. As for the law, particular emphasis was placed on the new legal ecosystem constituted around the EU’s AI Act, promulgated in July 2024. In particular, the question was raised whether the exponential pace of AI progress makes it impossible for any risk-based regulation of AI to remain current with the available technology. Another issue revolved around liability for algorithms at each stage of a supply chain. Klára Talabér-Ritz, of the European Commission’s Legal Service in Brussels, was the keynote speaker. A summary of the main theme and a list of the panelists are shown below.
UIA ANNUAL CONGRESS (PARIS):
The keynote speaker for the main theme on the regulation of Artificial Intelligence systems was Dr. Klára Talabér-Ritz, Legal Advisor in the European Commission’s Legal Service. Her presentation was made in two parts. First, she gave an overview of the European Union’s recently promulgated AI Act, with an explanation of the Commission’s future implementing regulations and guidance communications of what will be, in essence, a new legal ecosystem within the European Union functioning around the Commission’s new AI Office. Secondly, she described the ways and means of the digital overhaul of the knowledge management function of the Legal Service (more than 200 legal advisors, with support staff). In addition to the keynote speech and ensuing discussion, there were two panel discussions in which UIA lawyers were joined by guest speakers from the legal, business and academic worlds. The first panel, presided over by Professor Jean-Paul Vulliéty (Geneva), conducted a multidisciplinary discussion concerning the effects of AI on society in general, leading to its regulation. The discussion ranged from (i) an explanation by the President of the Paris Bar, Pierre Hoffman, of the means deployed to support the 35,000 members of the Paris Bar in the use of generative AI tools in their work as lawyers, to (ii) the observations of Jean-Gabriel Ganascia, Professor at the University of Paris I Panthéon-Sorbonne, on how the EU’s “risk-based” approach to AI regulation would be difficult to implement in the real world, due to the constant evolution of AI technology leading to ever-changing forms of risk, to (iii) the account of Jean-Rémi de Maistre, CEO & Co-Founder of Jus Mundi (Paris), of how his own experience as a young lawyer motivated him to launch his company as a start-up providing, via algorithmic tools, applicable international legal sources on a worldwide basis for cases in which lawyers are confronted with international elements in their work.
The second panel was presided over by Pierre Kirch, member of the Paris & Brussels bars. It involved an animated discussion concerning Artificial Intelligence and litigation on two specific issues: (i) the litigation of AI disputes and (ii) the use of generative AI tools by litigators. Much of the panel discussion amongst litigators Anna Gressel (New York), Gérard Haas (Paris) and Yoshihisa Hayakawa (Tokyo) turned upon the uncertainty of the applicable law in AI disputes involving new technologies. A key example: the expected introduction of AI agents into the business world in 2025 and the inevitably complex issues of liability which will result therefrom. Ian McDougall, newly appointed President of the LexisNexis Rule of Law Foundation (London), and Marco Imperiale of Better Ipsum (Milano) addressed how litigation lawyers should benefit from technological innovation and actively use available AI tools in their work. The ethics of the use of AI tools was a recurring theme, with Jeff Bullwinkel of Microsoft EMEA (London) presenting his group’s approach, not only within Microsoft itself but also in dealings with partner companies in the Microsoft ecosystem. The panel’s overriding concern: in the face of legal uncertainty and new laws and regulations such as the EU’s AI Act, what is the future of managing conflicts in the “AI Age”, not only within judicial and arbitral procedures but also through mediation processes, which may emerge as a newly attractive means of conflict resolution for the business world in cases where AI systems are involved in the conflict? As emphasized consistently during both main theme panels, the UIA has articulated its own official position on the use of Artificial Intelligence tools on a worldwide scale, by publication of its Guidelines for Use of AI by Lawyers upon the occasion of the 2024 Annual Congress.
Pierre Kirch
Avocat à la Cour (Paris & Brussels)
Moderator of Main Theme on the Regulation of Artificial Intelligence
RESPONSE BY PIERRE KIRCH (October 31, 2024):
“Technological progress in the artificial intelligence industry has become exponential. It results from the conjunction of three elements at the heart of ‘modern’ AI: computing power (chips, inexorably more powerful); the availability of data on a massive scale (‘Big Data’), thanks in particular to the daily use of social networks by billions of people; and LLM (‘Large Language Model’) algorithms operating, in the case of some generative AI models, with hundreds of billions of parameters (for example, OpenAI’s ChatGPT or Google’s Gemini). To this is added a key factor: communication about technological progress, transparent and immediate, because eight billion human beings communicate instantly with one another through a plethora of means, beginning with social networks (X, for example). In my view, this immediate communication on a planetary scale accelerates the pace of technological progress, because anyone can know everything instantly, whatever his or her geographic or social and professional situation.
“At the annual congress of the UIA, held last week in Paris, we exchanged views on the role of the lawyer in this new technological wave. The main theme was ‘Can—and should—Artificial Intelligence be regulated?’ But beyond the main theme, which we addressed during the first day of the congress, numerous commissions also dealt with specific AI-related topics within their fields of activity over the following two days. For example, the UIA commission on mediation organized a fascinating session on the proper use of AI tools in mediation, based on a practical audio-visual case study in which ChatGPT was appointed co-mediator alongside a human co-mediator to analyze a hypothetical mediation case.
...
Pierre Kirch
Avocat (Paris and Brussels Bars)
Coordinator of the main theme on the regulation of Artificial Intelligence
Program for the panel discussion to be led by Pierre Kirch in Vienna on Friday, January 17, 2025, upon the occasion of the 33rd World Forum of Mediation Centres: “The Mediation of AI-Sourced Labor Disputes.” In brief: “Tsunami” is a word often used to describe the impact of Artificial Intelligence on the workplace. The permanent destruction of millions of jobs, or a process of “creative destruction” with the emergence of new work patterns? Inevitably, the disruption, whatever its form, will be systemic. Conflict will be inevitable. How is it to be resolved? Can mediation as a dispute resolution method be the answer to the particular challenges brought by the forces of AI disruption in the workplace? Will new skills be required of mediators to effectively facilitate the resolution of such AI-sourced labor disputes?
By Pierre Kirch, Avocat à la Cour (Paris & Brussels Bars)
SYNOPSIS
Each term used in the title of this course is important: “Multidisciplinary approach to a law of Artificial Intelligence”. Why “a” law and not “the” law? For a very simple reason: researchers in the field of AI (there are perhaps millions of them, in many thousands of laboratories throughout the world), philosophers and AI entrepreneurs all agree that today, on a worldwide scale, there is no law of Artificial Intelligence per se. There is no true, comprehensive law of “Intelligence by Design” as opposed to biological, or human, intelligence. But there are more and more rules which apply to specific issues of Artificial Intelligence. In particular, since 2017, proposals have emerged from the European institutions. In July 2024, the European Union promulgated the AI Act, purported to be the first legislation of its kind in the world. This is just the beginning, and the true contours will emerge gradually in the form of implementing legislation and guidance documents, codes of conduct, etc. Multidisciplinary? That is to say, “interdisciplinary without limits”: not just AI as technology, but all disciplines at the crossroads of man, machine and, when relevant, the law: the “Big History” of our universe and the emergence of human intelligence (Homo sapiens), philosophy, psychology, the cognitive sciences, genetics, law and, of course, ethics. The course goes into the symbiosis between data/big data, the operation of basic algorithms and the different types of AI (narrow AI/general AI), as well as the new techniques of AI methods and learning: artificial neural networks and deep learning; supervised, unsupervised and reinforcement learning. There are many multidisciplinary questions to deal with. Where does human intelligence come from, and how has it evolved (the role of tools in “augmenting” biological intelligence)? Where does law come from? What is its function?
This leads to a whole series of questions about how the emergence of Artificial Intelligence can have an impact and bring on new challenges. There are legal and ethical challenges in a number of sectors, such as the material world (Internet of Things), the workplace (automation/robotization), predictive crime control and justice, medicine and healthcare (the temptation of transhumanism?), and governmental databases and practices (the use of facial recognition algorithms and the creation of universal databases concerning citizens). In sum, this is a course concerning “Artificial Intelligence and the Future of Humanity” in its legal and ethical context. The aim of the course is to arrive at an understanding of Artificial Intelligence that would allow us to get it right legally and ethically. The challenge is enormous. As Stephen Hawking famously said, shortly before his death in 2018, “Success in creating AI would be the biggest event in human history (…) Unfortunately, it might also be the last, unless we learn how to avoid the risks.” To avoid the risks, law and ethics are at the heart of the matter. That is what we call a “human-centric” approach to Artificial Intelligence.
By Pierre Kirch, Avocat à la Cour (Paris & Brussels)
2025 SEMINAR SYNOPSIS
In 2025, it is expected that leading AI companies such as Microsoft and Anthropic will bring to market an advanced form of generative AI tools known as “agents”. AI agents are intended to be Large Language Model (“LLM”) algorithmic tools able to make advanced probabilistic analyses allowing them to change their environment independently of human will, that is to say, to make decisions. The developers of AI agents often use the verb “to reason” to explain how AI agents will function in companies: based on input, the AI agent will be able to “reason” and then act autonomously. The introduction of AI agents into the workplace, if it takes place as expected in 2025, will have considerable legal consequences. The purpose of this seminar is to analyze those consequences. For students, this means acquiring a thorough mastery of the technical functioning of AI agents. In the seminar, each student assumes the role of CEO of a well-known multinational company. The student first makes an inventory of the AI tools currently used by the company. The student then develops, as CEO, a project to introduce agentic AI tools into the company and prepares a memorandum to the Board of Directors proposing and explaining the project (which the student will later defend in a business meeting with peers). Meanwhile, the seminar discussions will examine the key legal issues, such as liability for decisional operations carried out via the AI agent, competition issues where, as Anthropic anticipates, AI agents will collaborate directly with each other, and the question of whether a third type of legal person should be created: after physical persons and legal persons (corporations), electronic persons.