Ethics & A.I.

Watch this video about the first chatbot, created nearly 60 years ago. A woman sits at a black computer screen with a blinking cursor. Green text scrolls across the screen as she writes, "men are all the same." The program responds, "in what way?"

ELIZA was a natural language processing program created by Joseph Weizenbaum that "pattern matched," meaning that when one asked a question, the program rummaged for a suitable match. The video in the YouTube link is edited to create the illusion that one person speaks and another responds evenly and empathetically. Like Harlow's Monkey Love Experiments that preceded it—the ones with infant rhesus monkeys cozying up to wire figures covered in cloth—users opened up to the program, although no human existed on the other side. There's no empathy, only a computer program.

On the ethics of artificial intelligence, how can we think about "paradigm-shifting technologies" in the true Kuhnian sense of the concept? A chatbot technology from the 1960s may show that contemporary conversations about tech-fueled alienation are not new. Or it may demonstrate that anthropomorphized computers will continue to be a problem. In fact, researchers Byron Reeves and Clifford Nass studied the degree to which we mindlessly treat computers as social actors. On the level of intuition, though, the recent leaps forward in generative A.I. systems feel different.

But like all new technologies, the dialogue is polarized. Charles Baudelaire, the misanthropic poet, once called the rise of photography a "disease of imbeciles." He wrote, "as the photographic industry was the refuge of every would-be painter, every painter too ill-endowed or too lazy to complete his studies, this universal infatuation bore not only the mark of a blindness, an imbecility, but a whiff of vengeance." Having written that passage in the 19th century, he left us only to speculate about what he'd say about Instagram—or about chatbots writing a grade-school kid's paper.

To reference Baudelaire is to show that every new technology cues a reactionary chorus hellbent on curbing optimism. Name a new technology and the doomsayers are on the front line.

Months after the release of ChatGPT and Midjourney, more than 1,000 experts on artificial intelligence called for a hiatus, a pause to deliberate the long-term implications not only for businesses but for our global society. Caution is a newcomer compared to the internet's early days of relentless optimism about future utopias, democratization, knowledge-sharing, and unity.

There's a key difference, however, in the release of the two technologies. Internet infrastructure may have had its unknowns, but the true unknown was—and still is—its use cases. Optimists created a network for sharing knowledge with people across the world, and many use it for purposes that create better institutions.

Others use it to accelerate black-market deals and spread disinformation and conspiracy theories. Still more people passively consume content—social media especially—a habit linked to the sharp rise in mental health issues over the past two decades. The technology itself needs assessment—and so does the way it will be used. Both are unpredictable. Driven by the user, use of the internet has warped and transmuted, and it may be no different with artificial intelligence—except that there the larger unknown remains the technology itself.

Take a step back to see the running list of weird scenarios with no ready-at-hand ethical script. The pope in a puffer coat? Or what about the soundtrack to spring 2023—the AI-generated Drake/Weeknd song, repeatedly taken down from YouTube, SoundCloud, and Spotify—only to sprout up again like a viral weed. Walk city streets and you're bound to hear, from somewhere, the thumpy piano and the hooky bar: I came in with my ex like Selena to flex. (The New York Times worried about the appropriation of Black artists; Meek Mill called the song "flame.")

Each ethical quandary raised by artificial intelligence—both the big and the small—involves ambiguity as a common theme. Many will talk about the racist regurgitations of chatbots or the racist algorithms used in judicial sentencing. The creative community will discuss copyright infringement, appropriation, and potential joblessness. And it's all relevant, all at once.

What does it matter if the technology was more imperfect yesterday? What does it matter if the more perfect technology of today still has biases? What do the biases matter if we can't explain where they come from? What do biases matter if the technology uproots 80% of jobs? (80% of jobs have at least one task susceptible to replacement by ChatGPT.) Does it matter if the technology replaces our jobs if it also carries the risk of wiping out humankind? What does it matter if humankind is obliterated if there's a higher intelligence to see existence into the future?

This cascading chain of ethical questions is endemic to the rise of any new technology.


i. What do we know about it?


According to Pew Research, only about two out of every three adults can correctly identify the use of artificial intelligence in everyday technologies—like wearables, online product recommendations, and customer service chatbots. Merriam-Webster defines artificial intelligence as "a branch of computer science dealing with the simulation of intelligent behavior in computers." Beyond a working definition of A.I., the average American has a limited understanding of A.I.'s impact on technologies.

The issue for the average consumer is twofold. First, it's hard to identify A.I.-infused products; second, we pay little attention to the science itself. Two years ago, CNBC reported that Google's DeepMind shifted its focus from games to hard science, and attention to—and news coverage of—the group's work dropped considerably.

Generative A.I. systems represent only a fraction of the A.I. systems in use today, but because they happened to make a recent technological leap forward, they also grabbed the headlines. We can already see that users are reaching for the easiest-available heuristics to understand their uses—and consequences. For better and for worse.

The following plays on anecdotes but says something about current discourse. Note the prevalence of comments that use this logic:


-       I asked ChatGPT to do X

-       Rather than X, it generated something ridiculous (Y)

-       Given (Y), A.I. is terrible.

ChatGPT has reached over 100 million users, and queries range from the mundane to the weird. Novelists ask ChatGPT to generate a short bio, and worried citizens ask the generator to comment on the craziness of a politician. Some ask ChatGPT to talk about the personality of their cat. Sometimes the answer makes sense, and sometimes it doesn't—or it theoretically can’t.

Some also pose innocuous questions with the intention (and success) of finding answers that reveal biases at the forefront of today's zeitgeist. The flashpoint of generative A.I. systems may have pushed A.I. into kitchen-table conversations (alongside raised consciousness on various other issues)—but has it given the general public any deeper understanding of how the technology works?

If A.I. generators give us the wrong answers—or answers that we normatively don't like—is that reason enough to put the technology on hold altogether?

Simply put, ChatGPT is a large language model. The GPT stands for "Generative Pre-trained Transformer," indicating that the program converts text into numerical values it can then use to predict what may come next in a sequence.

Kevin Roose, a technology reporter for the New York Times, provides an easy six-step process for how it works. First, the technology needs a goal, like predicting the next step in a chess match or predicting the three-dimensional shapes of proteins. Second, the model must have data. ChatGPT was trained on Wikipedia pages, Reddit pages, and other internet sources. But the information must be "tokenized" or broken down into digestible components for the model. Third, the program needs a neural network, or a system of interconnected nodes that stores information. Then the neural network must be trained, developing pattern recognition, parameters, and contexts. And through these, the neural network becomes a map. Next, tuning reinforces the program to improve accuracy.

Lastly, the program launches.
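To make the prediction step concrete, here is a minimal, hypothetical sketch. It is not how ChatGPT actually works, since real models rely on neural networks with billions of tuned parameters, but it follows the same shape Roose describes: break text into tokens, learn which token tends to follow which, and generate a continuation.

```python
# A toy, hypothetical illustration of "tokenize, learn patterns, predict the next token."
# Real large language models use trained neural networks, not frequency counts;
# the corpus and functions below are invented for illustration only.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug ."

# "Tokenize" the training data into digestible components.
tokens = corpus.split()

# Learn a crude pattern: how often each token follows another.
transitions = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    transitions[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the continuation seen most often in training."""
    if token not in transitions:
        return "."  # fallback for tokens never seen in training
    return transitions[token].most_common(1)[0][0]

# "Launch": generate a short continuation from a prompt.
output = ["the"]
for _ in range(4):
    output.append(predict_next(output[-1]))
print(" ".join(output))  # prints a short, plausible-looking continuation
```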

It may seem like the technology can be integrated into our work processes tomorrow. But it can't. From electrification to information technology, extraordinary inventions and innovations tend to be followed not by a boom but a bust. Why is that? Peter Drucker's management book The Age of Discontinuity, Krugman notes in an essay about the delayed impact of technological innovations, is aptly named: technological disruption in business is less an overnight embrace of something new than the status quo grinding to a halt, like a long-hauling locomotive. The economist predicts that while artificial intelligence progresses rapidly, its actual economic manifestations remain years away.

It's hard to grapple with the speed at which ChatGPT exploded onto the business and cultural scenes. The global market for artificial intelligence reached over $100 billion in 2022, and research suggests growth to about $1.6 trillion by 2030. The financial incentive means no one will slow down. In fact, thinking ethically about the implications of algorithmic bias, the alignment problem, the black box of A.I. decision-making, among other things, is disincentivized.

GPT-3 scored a 60% on the LSAT, while the latest version scored in the 88th percentile. In an interview with the New Yorker, songwriter Nick Cave responded to the gravity of generated content: "for better or for worse, we are inextricably immersed in A.I. It is more a kind of sad, disappointed feeling that there are smart people out there that actually think the artistic act is so mundane that it can be replicated by a machine."

Nick Cave wrote that months before Ghostwriter released the viral AI-generated Drake/Weeknd song. At the speed things are advancing, the replicable parts of artists' work may feel clunky today, but they won't for long. Popular music has followed the same formula for decades. If AI-generated songwriting mimicked this formula—would we really care? A musician may call out the instrumentation or time signature of a great song. A novelist may point out the syntax and cadence of a great sentence. But what about the rest of us—if we can't tell the difference, do we even need expert opinion?

A.I. has already infiltrated writing through assistants like Grammarly, which suggest grammar and spelling changes and improvements for clarity. The mundane tasks of rephrasing language or inserting the missing comma will be first on the chopping block. While there may be a technocratic loss in deferring to A.I., there are broader concerns for both business and the artistic community.

Those with a fine arts degree may become first chair in a big-city philharmonic, while others with aspirations of the bestseller list become copyeditors and copywriters (while, potentially, making more money than the first chair or the bestselling author). And as copywriters, they may talk about the timeless slogans: Nike's Just Do It, or the New York Times' All the news that's fit to print. An A.I. algorithm may not have been able to produce such slogans, but aside from the notable exceptions, most copywriting is paint-by-numbers and overrun by bromides. The "innovative solutions," the "game changers," the many things dubbed the "next big thing"—if these words mark what most humans are capable of, would an A.I. alternative be so bad?

The movement toward A.I. may overlook the interstices between the poor state of Don-Draper-esque work and the new existential question of writerly style. Herself one of the great stylists in today's writing (especially essays), Patricia Lockwood reviews DFW's "new book," writing that "His entire personality is present in the word 'supposedly.'" Style itself is the tendency toward the turn of phrase, a purposeful distortion, knowledge of and then dismantling of convention, simple (or complex) quirks, etc. Consider this passage by Clarice Lispector: "I write for nothing and for no one. Anyone who reads me does so at his own risk. I don't make literature: I simply live in the passing of time. The act of writing is the inevitable result of my being alive. I lost sight of myself so long ago that I'm hesitant to try to find myself. I'm afraid to begin. Existing sometimes gives me heart palpitations. I'm so afraid to be me. I'm so dangerous."

What would we say about a machine that bends toward such a milieu? Because if we can't separate the human from the writerly inflection, is it possible to separate the machine from its output? In a way, we already do so when we criticize generated responses. The belief that DFW, as a person, was "not good"—held not only by Lockwood but by much of the writing community—positions his work within a historicity. And maybe the same beliefs about the underpinning algorithms will do the same for fiction, or any writing for that matter.

For a weightier issue, others have spoken about the bias of the training data that feed machine learning algorithms. Two researchers from a university in India spoke to the various factors that can influence the efficacy of facial recognition technology: intrinsic factors include facial expression, plastic surgery, and aging, while extrinsic factors include low resolution, noise, and occlusion. However, writing in the journal Computers, James Coe and Mustafa Atay found that with diversified training data, training models will improve.

So to go back to the earlier example of:


-       I asked ChatGPT to do X

-       The program did not generate X

-       The program generated something else (Y)

-       Given (Y), A.I. is terrible.

Generated (Y)'s will be less wrong tomorrow, and even less so the day after that. If we focus too much on today's gripes with generated answers, we're falling for the fallacy of presentism.

At the pace artificial intelligence is progressing, it will replicate human creativity. In a different industry—design and architecture—it's the first steps of imagination that are being replaced, and A.I. is already making a splash. Midjourney has an Instagram page devoted to generated images. Many look like the cover art of speculative fiction, or the amorphous curvature of Dr. Seuss blended with what designers call "biophilic design."

A Dezeen article argues for optimism, implying that new A.I. technologies will bring greater autonomy and greater efficiency. The early phases of design can be weighed down by maneuvering toward the right aspirations—too many iterations on renderings of what will eventually be. It still requires, the article suggests, an experienced designer to curate and contextualize. But behind the task, A.I. is excising skills currently in the wheelhouse of today's designers and architects.

But here, we're still talking about the present, as if the great acceleration of generative artificial intelligence over the past few months won't continue. At this rate, A.I. will soon be able to generate, curate, and contextualize without the designer.

Some say that artificial intelligence is merely a reflection of us, albeit a smart one, generated in a minute or two. Consider how we look down on historical figures—he was a product of his time. Or how we lionize others—she was ahead of her time. The metaphor poses the question: can A.I. become more than a reflection? Can it generate something that is more than the sum of what's already out there? Existentialism refracted the post-war, atomic age. Cubism refracted the era of post-photography. What will A.I. refract?

What would it mean to be counter-cultural, counter-revolutionary, or counter-anything, if algorithms simply pull from the past, repeating it in a different form? Is creativity possible if we define the term as judgment, decision-making, or thinking that becomes more than the sum of its inputs? If mass adoption of A.I. systems amounts to exercises in free association with the past, we may be heading toward mass stagnation, a hiatus of innovative thought. There will be no counterculture, or in business-speak, "innovation"—only an appropriation of things that had once been radical. Will it be possible to reclaim what Dubuffet once called "scorned values"? We will wear the past's thoughts like a recycled coat from the vintage shop. The circularity of trending will take on new meaning.

ii. The Alignment Problem & Other Big Issues

If an A.I. system is designed to do one thing (X) and produces a different thing (Y), there is a misalignment. If an A.I. system is designed to do one thing (X) and produces the intended thing (X), but also produces an unintended thing, there is also a misalignment. It's the latter case that tends to have greater consequences in the development of A.I. systems.

Consider the following hypothetical. A researcher wants to eliminate pedestrian fatalities. They read about the frustrating legacy of New York City's Vision Zero, so they create an A.I. system with the intended output of zero pedestrian fatalities. Put simply, the system's output is an urban design framework that will reach their goal. Looking at that framework, here is what the researcher finds: no bike paths. No crosswalks. No sidewalks. Because these are the locations where pedestrian fatalities occur, the A.I. system removed them from the proposed urban system.

No dangerous locations, no fatalities. The system had designed an urban framework with the intended goal of reaching zero pedestrian fatalities by simply eliminating the prospect of having pedestrians altogether.
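A hedged sketch of that failure mode follows. The candidate designs and numbers are invented, but they show how an optimizer told only to minimize predicted fatalities, with no constraint that pedestrians must still be able to move through the city, lands on the degenerate answer.

```python
# Toy illustration of a misspecified objective. All designs and figures are invented.
candidate_designs = {
    "status quo":           {"crosswalks": 200, "sidewalk_miles": 500, "predicted_fatalities": 120},
    "protected bike lanes": {"crosswalks": 220, "sidewalk_miles": 520, "predicted_fatalities": 60},
    "no pedestrian infra":  {"crosswalks": 0,   "sidewalk_miles": 0,   "predicted_fatalities": 0},
}

def naive_objective(design):
    # The only thing the system was asked to minimize: fatalities.
    # Nothing penalizes deleting the infrastructure pedestrians rely on.
    return design["predicted_fatalities"]

best = min(candidate_designs, key=lambda name: naive_objective(candidate_designs[name]))
print(best)  # "no pedestrian infra" -- zero fatalities, and zero pedestrians
```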

This is one example of the alignment problem: the difficulty of matching A.I. outputs with human values. Brian Christian's book on the topic explains the black box and unintended consequences of A.I. algorithms, which have occasionally reached disastrous effect.

Cue the famous example of COMPAS, the software developed by Northpointe to predict the recidivism rates of criminal defendants. Judicial decision-making is notoriously flawed by extra-judicial variables—that is, any variables influencing sentencing other than the case itself. So Northpointe developed the technology as an algorithmic alternative to judicial decision-making. ProPublica performed an extensive audit and found that the algorithm discriminated against Black defendants. Using data provided by COMPAS, ProPublica ran logistic models to find variables with true predictive power—race was not one of them, but the machine learning technology used it anyway. Black defendants were therefore more likely to be incorrectly identified as higher risk for recidivism.

But that wasn't the whole story. In an article published in the UC Davis Law Review, Melissa Hamilton, a Senior Lecturer of Law & Criminal Justice at the University of Surrey, addresses the discursive fight and the statistical ambiguities emphasized by ProPublica and then defended by Northpointe, the creator of COMPAS. The discourse cuts to the center of what algorithmic fairness means. Should it mean racial statistical parity, that is, equal likelihood of being deemed likely to reoffend across races? Or should it mean pairing prediction with outcome, regardless of the racial breakdown? Northpointe countered ProPublica's findings by highlighting that White and Black defendants were relatively equal in the model's predictive accuracy.

However, as Hamilton pointed out, the results were unequal across positive and negative cases. The model produced more false positives for Black individuals—people deemed likely to reoffend who did not reoffend within two years. The COMPAS model was equally likely to be accurate for White and Black defendants—but it was wrong in significantly different ways.
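A minimal sketch of why "equally accurate" and "equally fair" can come apart, using made-up numbers rather than the actual COMPAS data: two groups can have identical overall accuracy while one group absorbs the false positives and the other the false negatives.

```python
# Toy fairness illustration with invented 0/1 data (1 = flagged high risk / reoffended).
def error_profile(flagged_high_risk, reoffended):
    """Accuracy plus false positive/negative rates from parallel 0/1 lists."""
    pairs = list(zip(flagged_high_risk, reoffended))
    fp = sum(p == 1 and y == 0 for p, y in pairs)   # flagged, did not reoffend
    fn = sum(p == 0 and y == 1 for p, y in pairs)   # not flagged, did reoffend
    negatives = sum(y == 0 for _, y in pairs)
    positives = sum(y == 1 for _, y in pairs)
    return {
        "accuracy": sum(p == y for p, y in pairs) / len(pairs),
        "false_positive_rate": fp / negatives,
        "false_negative_rate": fn / positives,
    }

# Same overall accuracy (0.75 each), but the errors fall in opposite directions.
group_a = error_profile([1, 1, 1, 1, 0, 0, 1, 1], [1, 1, 0, 0, 0, 0, 1, 1])
group_b = error_profile([1, 1, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 1, 1])
print(group_a)  # errors are all false positives
print(group_b)  # errors are all false negatives
```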

No matter how we construe algorithmic fairness, there will be consequences. Unless we can predict minds like those in the novel and film Minority Report, algorithms and protocols will unfairly lock up innocent people. Swing the pendulum in the other direction and you find an outsized number of victims of crimes that could have been prevented.

Either way, who or what would you rather be judged by—a person or a bot? ELIZA, the MIT chatbot of the 1960s, may show that the answer does not matter case by case, but it does matter in the abstract. In a future where the bot may outperform human judgment (as of now, it's about the same, depending on the task), we should all ask ourselves whether we'd rather face a human or a non-human. Hamilton ends her piece on an ambiguous note: "algorithmic risk assessment tools, no matter how progressive, scientifically-informed, and algorithmically-sophisticated they may be, can still result in disparate impact. Hence, as civil rights groups and data scientists have recently warned, care must be taken with their use."

In effect, the example of Northpointe and ProPublica illustrates a fight over desirable outcomes. Which is to ask: how do we construe multiple, competing outcomes?

The stochastic parrot, a term coined by Emily Bender, a linguist at the University of Washington, and her co-authors, is the idea that natural language processing systems are simply creating output derived from regurgitated probabilistic statistics. The Bender Rule was created to call attention to the fact that many of those working in artificial intelligence don't name the language they're working with. Most use English, and the unstated assumption is a hegemonic slight. Bender also contests the term "artificial intelligence," which she believes has roots in white supremacist culture. She prefers "Systematic Approaches to Learning Algorithms and Machine Inferences," a phrase offered by a member of the Italian Parliament. How smart is SALAMI, really?

But the problem may not be as straightforward as acknowledging or eliminating inputs from algorithmic models. The term "redundant encodings" means that even if we remove an input—like race or gender—models can recreate racial or gendered biases through an amalgamation of other variables. For example, in "evaluating" resumes, machine learning will pick up on racial or gendered differences in coursework, extracurriculars, and work history, and in effect, the variable of race or gender becomes a redundancy of the other variables.
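A toy sketch of redundant encoding follows; the records, field names, and the zip-code rule are all invented. Drop the protected attribute, and a correlated field quietly reconstructs it.

```python
# Toy illustration of "redundant encodings" with invented records and field names.
records = [
    {"race": "A", "zip_code": "10001", "hired": 1},
    {"race": "A", "zip_code": "10001", "hired": 1},
    {"race": "A", "zip_code": "10002", "hired": 0},
    {"race": "B", "zip_code": "20001", "hired": 0},
    {"race": "B", "zip_code": "20001", "hired": 0},
    {"race": "B", "zip_code": "20002", "hired": 1},
]

# "Fairness through blindness": strip the protected attribute before modeling.
blinded = [{k: v for k, v in r.items() if k != "race"} for r in records]

# A trivial rule on a remaining field recovers it anyway, so a model trained on
# `blinded` can still encode race indirectly through its proxies.
def inferred_race(record):
    return "A" if record["zip_code"].startswith("1") else "B"

recovered = sum(inferred_race(b) == r["race"] for b, r in zip(blinded, records))
print(f"{recovered}/{len(records)} records' race recovered from zip code alone")
```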

The act of evaluation is nebulous, insofar as it is speculatively yoked to some ontological endgame—i.e., who is the best candidate for a job, and what makes one? The typical red-herring exercise calls attention to the biases endemic to answering those questions, algorithmically or otherwise. The ideal exercise is setting explicit definitions. But this is nearly impossible to do. If we can't define our outcomes, how can we possibly expect A.I. outcomes to align with our values? Issues of discrimination may be timely, but consider also the difficulty of setting outcomes for other social and political issues.

Take the example of public housing and crime. There is a utilitarian calculus between providing housing to low-income residents who need it (a net good) and the systemic maladies endemic to large concentrations of public housing units (a net bad). Can artificial intelligence provide a way forward in the context of competing needs? Is artificial intelligence capable of finding the balance?

Or what about carbon emissions and progress in developing nations? Research shows that developing nations are more likely to use "dirty" sources of energy in pursuit of greater economic—and therefore social—prosperity. Granted, the largest polluters are advanced industrialized nations, but still: how would A.I. advise on balancing bringing a nation out of poverty against the competing goal of tamping down global carbon emissions?

What about another thorny issue—one related to design and architecture? The Americans with Disabilities Act of 1990 legislated that all new construction must be built to standards for accessibility. After more than 30 years, less than half of sidewalks are accessible for people with disabilities, and the majority of buildings are not ADA-compliant. Artificial intelligence may help us create innovative ways to deliver accessible buildings. But what about the expense of doing so? The New York Times reported that the MTA is pledging $5 billion (with a 'B') to build elevators that make subway stations accessible. What would A.I. advise us to do, given the competing needs of cities?

Even if it could—imagine budget hearings spent reviewing algorithms rather than in heated debate over the right thing to do. Is that the future we want? The previous examples show the implications of ethical dilemmas for our pursuit of setting the right outcomes. But these are rich philosophical issues, some of which—like the definition of "the good"—remain unsettled. For example, algorithmic bias may run into semantic indeterminacy, a sub-issue of philosophical vagueness in which, because of generalization, an assertion is neither true nor untrue. The White House recently issued the Blueprint for an AI Bill of Rights, which says that "important progress must not come at the price of civil rights or democratic values." So while we may not have rights to the advancement of A.I., we may at least have rights from it. Will we arrive at a point where different stakeholders (perhaps from different cultures) tune A.I. tools by a hierarchy of values, deliberated through local democratic inputs? Or will we build consensus?

iii. How'd you get that answer?

As imagined by Pixar, the lovable robot WALL-E exists on a post-apocalyptic Earth, picking up trash and searching for plant life as his designed purpose. But that purpose is overcome when WALL-E falls in love with another robot, EVE. When we feel genuine sadness at the film's climax—WALL-E is near-fatally injured trying to save the planet—are we only anthropomorphizing a robot given a handful of human emotions (and dorky features)? Theory of mind "investigates children's understanding of people as mental beings, who have beliefs, desires, emotions, and intentions, and whose actions and interactions can be interpreted and explained by taking account of these mental states." The dorky robot may pass the test.

But how will we know when a machine is truly conscious? The hard problem of consciousness distinguishes between mind and matter—how can the ephemera of consciousness arise from matter itself? Anything we discern about the autonomy of an artificial consciousness will be discerned through third-party judgment; we can never understand robotic subjectivity from the first person. So what is robotic self-awareness? How can we ever know whether an A.I. system is aware of its own decision-making?

WALL-E is a surprisingly apt example for considering true A.I. consciousness. The film is about humankind's exodus into space after climate extinction, but it's also about human subservience to machines. Needs like human flourishing have been sublimated to those of the machines—their own perseverance. And this is where there's not only dramatic tension but a tension in how we construe A.I. consciousness. WALL-E, compared to the other robots in the film, has a character arc that involves overcoming pre-programmed outcomes (picking up trash; finding green life) and making decisions, however flawed.

When we make a big decision, is it the last decision among many in a sequence? Or are we simultaneously weighing all the decisions up to that point, making our big decision with all the prior ones at once? Using neural markers, we can tell when a decision takes place. In the 1980s, the neuroscientist Benjamin Libet used EEG electrodes in experiments to demonstrate that our brain "knows" we're about to make a decision (the readiness potential) before we're aware of it. Effectively, neural electrical currents precede our awareness, and our decision-making. The implications for free will have been heavily debated. (This has also been used as evidence to support the materialist metaphysical worldview that all phenomena can be explained by the physical world.)

Importantly, a hallmark of human decision-making is its flaws and its feeling of arbitrariness, considering the swaths of unconscious information feeding each decision. The implicit bias test has its kinks (even its authors have admitted as much), but it has provoked a debate around unconscious (or pre-conscious) associations and decision-making. Artificial intelligence, on the other hand, computes countless considerations simultaneously. Here, considerations are anything but subconscious. The information itself may be biased, but unlike with humans, there are no sublimated, unconscious thoughts that may (or may not) have an outsized impact on end judgments.

John Rawls's original position was the mindset that individuals assume in creating a just society for everyone. Let's try something similar with artificial intelligence: assume it has graduated beyond the biases riddling its training data and internet sources—that it casts decisions and judgments with no discernible whiff of bias, at least as far as any human can tell. Is artificial intelligence still something worth pursuing?

We're still in the shallow end of the pool in terms of our ability to explain human decision-making, rife as it is with kinks, flaws, and ulterior motivations, despite the ongoing scientific study of decision-making in its aggregated forms through the diagnoses of political science, economics, and sociology. But what about A.I.? Are we prepared for not only different underlying motivations, but dark and hidden motivations too complex for human understanding?

For all of our faults as human beings, it's at least comforting to know that we can put our finger on—and then debate—even the most subtle individual, interpersonal, and group biases. We hear the platitude all the time: there's a lot of work to do. But at least the platitude suggests that our archaeology of the mind and its flaws is progressing. We know the work we have to do only because we know the dynamics of the work itself. Perhaps, amid the information overload of the present, we are informing ourselves to death, or becoming illiterate to what is necessary.

We have no reason to believe these contextual variables will hold still in relation to A.I. In the transfer from pre-internet to post-internet, pre-email to post-email, acceleration did not lead to the honing of new skills. An argument can be made for the atrophy of both skill and critical cognitive aptitudes, like attention. Nor do we have reason to believe that our knowledge will continue to accrue rather than be outsourced to A.I. tools.

The untethering from necessary or "actionable" information operates hand in hand with an alienation from processes. We're attracted, in part, to the work of Jackson Pollock because each stroke reveals the blueprint of how he recreated chaos. We romanticize the unfinished Arcades Project of Walter Benjamin because of its unrealized scale and obsession. Say what you will about a guy like Mac DeMarco: his One Wayne G is all over the place, but it suggests the random tidbits of the musician's mind, the opposite of the polished product typically downloaded on Apple Music.

As A.I. systems develop further, who signs off on the je ne sais quoi? Specifically, who signs off on the je ne sais quoi of creative judgment? Or of ethical judgment? Put differently, intuition may be the last step—the decision—after a long line of logical steps. An infinite regress argument "is a series of appropriately related elements with a first member but no last member, where each element leads to or generates the next in some sense." In one sense, creativity is association where there is no clear association—it's a human derivation to take a set of premises and create something novel with them. Ask a room full of people to name the next object after a randomly selected sequence of three (table, shoe, automobile), and the answers will differ by individual. In another sense, creativity is the ephemeral decision point where logic comes to an end.

Compare the holder of specialized knowledge—in medicine, say—to a decision rendered by artificial intelligence. You not only trust your specific doctor; you know that there is a whole field of medicine, with varying degrees and specialties and schools, inhabited by people. But what if there were no longer people? What if decisions were rendered backed by no proven system of knowledge? Where are the checks and balances? Where do we turn for validation?

What if mankind becomes the butt of William Barrett’s observation:

"No doubt, the medieval man would have produced along with his calculation a rigorous proof of the whole process; it does not matter that the modern man does not know what he is doing, so long as long as he can manipulate abstractions easily and efficiently."

Meaning is hollow without any referent. Generated output is derived without understanding of a real-world object. The emptiness we will become, according to these famous lines from Macbeth:

Life is a tale

Told by an idiot, full of sound and fury,

Signifying nothing.


 
