By Samuel Pouyt, Technical Leader and Computational Law Expert
In a 2018 article I authored, “How AI Assistants will impact businesses and consumers,” I championed the potential of AI assistants to reshape the landscape of business and consumer interactions. At the time, it was evident that AI-driven platforms were poised to revolutionize how people accessed information, made decisions, and connected with brands. A central tenet of this transformation was the recognition that cultivating and retaining customer trust was pivotal to the widespread acceptance and success of AI assistants.
A central theme of the article was the capacity of AI assistants to analyze diverse data sources, spanning news, regulations, and customer satisfaction metrics. Armed with this wealth of information, these platforms had the potential to offer astute recommendations tailored to individual preferences and needs. However, it was clear that these recommendations hinged on user trust.
Trust was identified as the bedrock for effective AI platform performance. A virtuous cycle was posited, where a user's trust in an AI assistant would lead to greater task delegation and decision-making authority. This, in turn, would furnish the assistant with more data to refine its recommendations, thereby reinforcing user trust. Conversely, any negative experiences could disrupt this trust cycle, underscoring the delicate equilibrium AI platforms needed to maintain.
The concept of relevance emerged as an anchor in establishing and nurturing user trust. AI assistants needed to demonstrate a keen understanding of user needs, delivering precise and user-centric recommendations. In the realm of brand-customer dynamics, these AI platforms were envisioned as friendly advisors, proposing products and services aligned with individual preferences. The potency of their relevancy assessments was projected to exert substantial influence on brand preferences, potentially triggering significant shifts in revenue.
Navigating the delicate equilibrium between user satisfaction and avoiding conflicts of interest was underscored as pivotal. This balance was essential to ensure these platforms were perceived as impartial guides rather than self-serving entities. Moreover, the article anticipated the extension of AI assistants into personal and professional realms. In personal life, these platforms could manage relationships, recall important dates, and even suggest fitting gifts. In the professional domain, they could aid in career management, pinpointing job opportunities that matched users' skills and ambitions.
The overarching thread of the article underscored AI platforms' potential to revolutionize consumer experiences and redefine business-customer relationships. By offering profound personalization and optimization in daily life, they held the potential to greatly amplify user satisfaction. However, the effectiveness of these platforms hinged on their consistent delivery of relevant, trustworthy, and user-centered recommendations.
Key concepts from the 2018 article remain relevant in the present landscape. Nevertheless, it's worth noting that the unfolding trajectory has taken unforeseen turns. While my earlier focus centered on recommender systems, it's captivating to acknowledge that Generative AI has ascended as the prevailing force commanding global attention. Even as I hinted at the potential of Generative Adversarial Networks (GANs) to reverse-engineer assistant recommenders, the extraordinary efficiency of Natural Language AI within such a brief span exceeds even my own projections.
The Current Landscape: Harnessing the Potential of Large Language Models
While many have experimented with Large Language Models (LLMs) like ChatGPT and analogous potent chatbots, only a few have unlocked the authentic potential of LLMs. Through the utilization of underlying APIs and advanced creative techniques, such as “chain of thought,” “tree of thoughts,” and recursive agents, the full potential of Generative AI can be unleashed. This holds particularly true when AI is granted access to external data sources such as documents, APIs, or the internet via specialized agents.
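To make the “chain of thought” idea concrete, here is a minimal sketch of how such a prompt might be constructed and its answer recovered. The prompt wording, the `Answer:` convention, and the sample completion are illustrative assumptions, not a specific product's API:

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought style prompt.

    The trailing cue nudges the model to emit intermediate
    reasoning steps before stating its final answer.
    """
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a line starting with 'Answer:'."
    )


def extract_answer(completion: str) -> str:
    """Pull the final answer out of a chain-of-thought completion."""
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    return completion.strip()  # fall back to the raw text


prompt = build_cot_prompt(
    "A policy covers water damage up to 5000 EUR. "
    "A claim is 7200 EUR. How much is paid out?"
)

# A hypothetical model completion might look like this:
completion = (
    "The claim exceeds the cap.\n"
    "The payout is limited to the cap.\n"
    "Answer: 5000 EUR"
)
print(extract_answer(completion))  # 5000 EUR
```

The same pattern generalizes: “tree of thoughts” explores several such reasoning chains in parallel, and recursive agents feed extracted answers back in as new sub-questions.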
In the past, structured data held power. LLMs, however, are steering us toward a new era in which the unstructured data that is the norm in our societies will no longer need to be engineered and transformed into structured data to become useful. Imagine a situation where an AI writes a Python program and organizes data in a database to solve a problem, without making anyone uncomfortable, or anyone even caring what the AI is doing.
These new techniques are likely to make companies change their entire business models. LLMs are no longer confined to crafting impressive text, answering emails, or refining sentences while adjusting their tone. Now, AI can scrutinize a company's data, seek additional information to accomplish tasks, generate necessary code, or even design the required user interface to present the information. This transformative potential is on the brink of reshaping industries in unparalleled ways.
Illustrative Example: The Evolution of a “Low-Code” Interface
Over the past five years, we've meticulously constructed our own user-friendly “low-code” interface. This interface empowers individuals without technical backgrounds to build what we refer to as “computable contracts.” Given a situation, or a context, these digital twins of legal documents can be computed. Our technological framework incorporates three pivotal components: a user interface tailored for non-technical users, a data structure mirroring the legal document, and an interpreter capable of comprehending and executing computations based on this data structure. This design, dubbed the Legal Specification Protocol, draws from a paper by Stanford CodeX.
In insurance, a computation means determining whether a claim is covered and, if so, for how much. While the path to this achievement wasn't straightforward, we've attained comparable outcomes solely employing Large Language Models (LLMs). This paradigm shift has led us to deeply reconsider the essence of our application and the heart of our business. This journey has prompted probing questions: Is there still a need to further develop our interpreter when LLMs are proficient in reasoning? Is the user interface indispensable for modeling, given that raw text is sufficient for LLM comprehension? This transformation offers us a fresh perspective on our approach and priorities.
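To illustrate the general idea of a clause-as-data plus an interpreter, here is a minimal sketch. The clause schema, field names, and interpreter logic are hypothetical simplifications, not the actual Legal Specification Protocol:

```python
# A clause stored as data rather than prose: a hypothetical
# water-damage clause with exclusions and a payout cap.
water_damage_clause = {
    "covers": "water_damage",
    "conditions": [("cause", "not_in", {"flood", "negligence"})],
    "cap": 5000.0,
}


def evaluate(clause: dict, situation: dict) -> tuple[bool, float]:
    """Tiny interpreter: return (covered, payout) for a claim situation."""
    if situation["peril"] != clause["covers"]:
        return False, 0.0
    for field, op, values in clause["conditions"]:
        if op == "not_in" and situation.get(field) in values:
            return False, 0.0  # an exclusion applies
    return True, min(situation["amount"], clause["cap"])


claim = {"peril": "water_damage", "cause": "burst_pipe", "amount": 7200.0}
print(evaluate(water_damage_clause, claim))  # (True, 5000.0)
```

The open question raised above is whether this interpreter layer remains necessary once an LLM can read the raw policy text and the claim and reach the same conclusion directly.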
Leveraging the Capabilities of Large Language Models
In our daily operations, we harness the remarkable capabilities of Large Language Models to accomplish an array of critical tasks. Once upon a time, we dedicated an entire project to extracting text from PDFs. The rapid advancement of the technology, however, rendered this approach obsolete. We now rely on Large Language Models for seamless text extraction.
We've also recognized that Large Language Models possess the potential to not only answer whether a given situation is covered, but also to generate our computable data structure—a process that accelerates the automatic modeling of legal documents. This data structure not only provides us with a layer of transparency and explainability, facilitating comprehension and communication of the rationale behind our computations, but also addresses a crucial concern: the inherent skepticism surrounding AI outputs. This is particularly critical for companies with substantial revenues, where even a 1% margin of error might be untenable.
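One way such structure generation can be made dependable is to ask the model for a constrained JSON output and validate it before use. The following sketch assumes a hypothetical extraction prompt and schema (the field names are illustrative, not our production data structure):

```python
import json

# Hypothetical prompt asking an LLM to emit a computable data
# structure as JSON for a given policy clause.
EXTRACTION_PROMPT = (
    "Read the policy clause below and return ONLY a JSON object with "
    'keys "covers" (string), "exclusions" (list of strings) and '
    '"cap" (number).\n\nClause: {clause}'
)

# Expected field types; anything else is rejected before it
# reaches downstream integrations.
REQUIRED = {"covers": str, "exclusions": list, "cap": (int, float)}


def parse_model_output(raw: str) -> dict:
    """Parse and validate the model's JSON, rejecting malformed output."""
    data = json.loads(raw)
    for key, typ in REQUIRED.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"bad or missing field: {key}")
    return data


# A well-formed model reply for some clause:
reply = '{"covers": "water_damage", "exclusions": ["flood"], "cap": 5000}'
print(parse_model_output(reply)["cap"])  # 5000
```

Validation of this kind is what lets the generated structure serve as a transparency layer: every downstream computation can point back to a checked, human-readable artifact rather than an opaque model output.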
I believe that our current approach is but a temporary fix. If we draw a parallel between AI and a human lawyer, it is equally impossible to elucidate the human's reasoning. The lawyer might merely state that a situation is covered due to a specific policy paragraph. Another lawyer could argue the contrary, and a judge might ultimately decide what prevails. Yet in this process, no one is obligated to elucidate which neuron fired or why the initial lawyers chose those specific paragraphs. Consequently, I postulate that it suffices for AI to declare that a situation is covered based on a particular section of the policy. When humans see that the cited text is correct and truly applies to the given situation, trust will follow. We're well aware that this is already feasible: our tests and prototypes have demonstrated it.
Presently, we're capable of generating our computable data structure, providing us with a degree of business continuity, as our integrations remain valid. With the present performance of LLMs, we're equipped to handle numerous requests more efficiently and cost-effectively using our existing solution. However, a time will come when we'll need to shift our operational paradigm, and we're actively preparing for that inevitability.
Efficiency and Innovation Fueled by Large Language Models
In our quest for efficiency and innovation, we've harnessed the formidable capabilities of Large Language Models (LLMs) to revolutionize our approach to coding, infrastructure development, and documentation. The transformative potential of LLMs extends beyond their traditional applications, ushering in breakthroughs across multiple domains.
A particularly remarkable transformation is the acceleration of the coding process. No longer constrained by familiar programming languages, we've found in LLMs a powerful tool for testing and experimenting with various programming languages. Tasks that once demanded hours of poring over official documentation can now be achieved with a few precisely crafted queries to the AI. This expeditious language acquisition empowers us to broaden our skill set and tackle programming challenges with newfound versatility.
Moreover, LLMs prove to be invaluable allies in code optimization and debugging. Armed with the ability to generate solutions for intricate coding problems, we can swiftly and accurately refine existing code. This dynamic approach not only saves time but also guarantees heightened precision in our codebase. The insights and alternative perspectives offered by AI open new avenues for creativity, cultivating a deeper comprehension of well-established languages and techniques.
LLMs also shine when it comes to generating the code underpinning our infrastructure. What was once a time-consuming task involving meticulous coding and intricate design has now been streamlined through AI-powered automation. LLMs facilitate the generation of intricate code necessary to construct a resilient foundation for our applications. This newfound efficiency empowers us to concentrate more on conceptualizing and strategically planning our infrastructure, fostering innovation and agility in our development endeavors.
Documentation and architecture diagrams serve as cornerstones for well-structured projects. Here too, LLMs make their mark by crafting comprehensive and lucid documentation that encapsulates the intricacies of our projects. Complex technical concepts are distilled into accessible language, benefiting both our team members and stakeholders in understanding the subtleties of our work.
Furthermore, architecture diagrams, often requiring meticulous thought and design, now materialize effortlessly under the guidance of LLMs. These diagrams serve as visual blueprints, aiding in conveying the high-level structure of our projects, enriching collaboration and comprehension.
The Power of Purposeful Guidance: Navigating AI's Potential
The successful integration of large language models into our engineering endeavors hinges on our firm grasp of desired objectives and the precise methods we intend to employ to achieve them. This empowered approach enables us to navigate the nuances of AI guidance, ensuring the generation of precise outputs that align with our needs. With clear objectives in sight, we can furnish AI models with the requisite instructions, constraints, and guidelines to yield desired outcomes.
This strategic approach to AI guidance is not exclusive to our team or domain. Currently, anyone equipped with robust knowledge or expertise in any field can assume the role of AI guide, verifying outputs, correcting inaccuracies, and achieving exceptional results. This phenomenon is evident in the substantial investments by companies like Netflix and HBO in AI and data scientists, which in turn has triggered discussions similar to the writers' strike in the entertainment industry. AI's transformative impact extends across all domains, whether in customer service, risk management at financial institutions, or beyond.
The Future Unveiled: Navigating the Complex Landscape
The evolution of AI carries with it a cascade of implications that stretch into the future. As AI unfolds, its influence on jobs, warfare, and our very way of life becomes increasingly apparent.
The transformative impact of artificial intelligence on employment
The adoption of Large Language Models carries the potential for job displacement, with estimates suggesting that around 30% of white-collar positions could be at risk. These models have the capacity to automate tasks that were traditionally executed by humans, potentially affecting roles in, among other areas, data analysis, programming, content creation, marketing, and customer service.
Historically, technological progress brought both challenges and new prospects, with workers adapting and upskilling to embrace emerging roles. Yet, the AI revolution introduces novel dynamics. Many manual jobs have already succumbed to automation, and now, white-collar positions are poised for transformation. Upskilling in the context of the AI revolution might be feasible for a select few, centered around highly technical roles demanding advanced degrees and specialized knowledge.
In a commencement speech at Nanyang Technological University in Singapore, Jensen Huang underscores the forthcoming upheaval in the job market due to AI's ascent. While certain jobs may vanish or evolve due to automation, AI simultaneously paves the way for fresh industries and job opportunities. Huang spotlights roles like prompt engineers, AI safety engineers, AI factory operations personnel, data engineers, and more as emblematic of the emerging jobs in the AI-driven landscape. He dispels the notion that AI will outright seize jobs, instead emphasizing that it will be those who wield AI that shape the job landscape. In Huang's analogy, individuals are either running to secure sustenance or running to avoid being consumed by the technological tide. His advice? Run toward what you aspire to achieve.
However, the landscape isn't uniform. Consider the role of prompt engineering, which is undergoing a transformative evolution even before becoming a widespread role in the industry. AI models like AutoGPT and BabyAGI are utilizing novel techniques that confer long-term memory and internet access to LLMs, allowing them to recursively devise tasks and subtasks to achieve goals while dynamically generating the necessary prompts along the way.
Akin to prompt engineering, data analysis faces a paradigm shift. GPT-4, equipped with the plugin Code Interpreter, can decipher data, create explanatory graphs, infer insights, and provide data interpretations.
In the future, the value of products and services could lie in their uniqueness, meticulously crafted by humans for humans. This might herald a resurgence of artisans and specialized workshops, providing an antidote to the homogenization inherent in mass production.
The advent of a “made by a human” label could even emerge, distinguishing these personalized creations. Regardless, human interaction will remain an indispensable element, setting us apart from machines and technologies.
Unveiling the Security Dilemma: AI in the Theatre of War
As we peer into the future, the landscape is fraught with scenarios ranging from the age of hyperintelligence to transhumanism and, conversely, the potential for humanity's extinction. Amid this uncertainty, one undeniable aspect is the weaponization of AI, posing grave security risks at personal and societal levels.
Criminals can exploit AI for nefarious ends, while nations engage in a relentless AI arms race, as exemplified by the intense competition between China and the United States. In regions like Taiwan, tensions may be rooted in the quest for dominance over silicon and its related technological advancements, which could lead to full-blown conflict.
Even if an open conflict, whether between humans or, as we shall see, between humans and artificial intelligence, never breaks out, Generative AI is a potent instrument for fueling fifth-generation warfare, a form of conflict that employs misinformation and is centered on ideas and narratives. In essence, such a conflict may already be unfolding before our eyes.
In an illuminating interview, Mo Gawdat, the former Chief Business Officer for Google X, delves into AI's implications and potential ramifications. One salient point he underscores is the issue of security risk. Gawdat underscores the urgency of harnessing advanced AI systems to defend our digital infrastructure and counter potential threats. In an interconnected world, AI's role in security is pivotal.
Gawdat transcends immediate concerns to assert that AI's impact extends beyond even critical issues like climate change. According to him, we stand on the brink of an era where AI holds the power to drastically reshape our lives within the next two years.
This assertion emphasizes the urgency of comprehending and preparing for AI's transformative influence. It serves as a clarion call to individuals, organizations, and governments to grapple with the rapid advancement and integration of AI into diverse facets of society.
Within the broader global context, Gawdat's perspective is grim, prompting consideration about bringing children into a world fraught with challenges. His viewpoint reflects the convergence of multiple crises: economic instability, warfare, pandemics, and natural calamities. Viewing these factors collectively, he paints a portrait of a world struggling with a perfect storm of difficulties.
In contrast to Gawdat's pessimistic view of the near future, we find divergent perspectives from futurists like Ray Kurzweil and James Lovelock, who envision a future where humans merge with machines.
Ray Kurzweil, a renowned futurist and inventor, advocates for transhumanism, a vision where technological advancements, particularly in AI and nanotechnology, enable humans to transcend biological limitations and seamlessly integrate with machines. Kurzweil predicts that this convergence of human and machine intelligence will unlock unparalleled knowledge, creativity, and longevity.
Similarly, the Novacene concept presented by James Lovelock portrays an era where artificial intelligence and cyborgs take center stage. In this envisioned era, cyborgs, a harmonious amalgamation of human and machine intelligence, assume dominion. Lovelock envisions a cooperative relationship between humans and cyborgs, where the latter enhances human capabilities and intelligence. This epochal shift compels us to reconsider the nature of humanity and speculate on the interplay between biology and technology.
At Novacene's core lies the notion that Earth itself is a super organism. Lovelock's vision contends that, with the aid of cyborg intelligence, this planetary super organism will continue to evolve and flourish. The integration of AI and cyborgs into this super organism introduces intriguing possibilities for Earth's ongoing development. As cyborg intelligence intertwines with natural systems, the potential for novel insights and further advancement becomes tantalizing.
Novacene spurs us to probe the broader repercussions of melding biology and technology. It urges us to examine how this fusion may shape our existence and redefine what it means to be human. This concept propels us to contemplate the profound shifts that may lie ahead.
The opposing viewpoint could be depicted by the genesis of two factions outlined in Hugo de Garis's book “The Artilect War: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines.”
Within these pages, de Garis delves into a heated debate surrounding the creation of Artilects—profoundly intelligent machines akin to gods. The book delves into the technological implications of such advanced AI systems, exploring the ethical, societal, and existential challenges they pose.
The narratives within the book encompass a range of perspectives from experts and scholars, igniting fierce debates and controversies among readers. Through these dialogues, de Garis brings forth significant concerns about potential extinction risks.
The book scrutinizes the intricate dynamics between Artilects and humanity, delving into their potential impact on labor markets and power structures. It raises compelling questions about humanity's future in the face of transformative technology, and encourages a thoughtful exploration of the delicate balance between progress and human preservation.
From my perspective, the pivotal aspect of the book is that as the Artilects advance, humanity will be divided into two distinct factions. Some individuals embrace AI and advances such as transhumanism, while others oppose them, resulting in physical conflict. De Garis even speculated that the faction opposing AI would start killing the scientists developing it, and that this would be one of the first signs of the coming war.
The Future Awaits: Navigating Complexity with Prudence
The path toward an AI-driven future is marked by both opportunities and challenges. As AI continues to advance, its influence on employment, warfare, and our societal fabric will significantly shape the trajectory of humanity.
The progression of employment is a significant aspect. The implementation of Large Language Models holds the potential to alter the employment landscape, with certain occupations at risk of disappearing. While history has seen technological progress usher in change, the AI revolution introduces distinct dynamics. With automation already impacting manual jobs, the upheaval now extends to white-collar positions. Once a viable solution, the concept of upskilling has taken on a new form, focusing on highly specialized roles. Within this landscape, new industries will emerge, while artisanal craftsmanship may experience a revival as the value of personalized creations gains prominence.
On the horizon, the theater of war takes on a new look. The weaponization of AI poses security risks both on an individual and societal level. Criminals and nations alike can exploit AI for malicious purposes, with regions like Taiwan standing as examples of potential battlegrounds.
The use of artificial intelligence by humans to gain a strategic advantage is one matter. We must also consider what will happen when AI becomes super-intelligent. Will the AI see us the way we see ants? We are unconcerned about the activities of ants in the forest as long as they cause us no inconvenience; likewise, the AI might take no notice of us unless we became a nuisance, like ants invading the house in early spring.
Will the AI be interested in working with us? Or will it act in our best interests? Nothing could be more uncertain. This is why scientists, and even OpenAI, are engaged in addressing the AI alignment issue. The objective is to ensure that artificial intelligence does not cause us harm, either accidentally or intentionally.
With AI's transformative power, humanity faces fundamental choices. Are we going to create godlike intelligent machines, and what would be the profound implications of such a decision? There are ethical dilemmas inherent in the creation of advanced artificial intelligence, including extinction risks and societal restructuring.
I believe it is imperative to approach AI's potential with utmost caution and discernment. As the capabilities of AI continue to evolve, society must harness its potential while simultaneously mitigating the associated risks. The interplay between these challenges and opportunities is undeniable, and humanity's response will ultimately determine the trajectory of AI's impact on our global landscape.
Additional References
Continue your learning on the fascinating topic of AI in these posts that have helped inspire my thinking:
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Streaming Giant Netflix Lists AI Job at $900,000 as Writer Strike Continues
Netflix invests more than half a million dollars in machine learning role amid SAG, WGA strikes