
How General and How Comprehensive Must AGI Be?


Project description

I want to submit a full essay to the OP AI Worldviews Contest. Here is my current situation: I have been working on several AI risk and safety essays, and I recently learned about this contest, which is closing soon. (My spouse, Brian Edwards, told me about the competition, and I did not realize how good a fit it was for me until shortly after the semester ended.) My idea is not completely formed yet, but if I am able to focus on editing and continuing this project, I believe I can submit an essay to the OP AI Worldviews Contest that helpfully informs some of the current thinking on these issues.

I am currently a Ph.D. candidate in the philosophy department at Indiana University Bloomington, where I have been working on and thinking about a number of philosophical and ethical (primarily risk-related) concerns. Last summer I presented two papers at the conference of PERITIA (an EU-funded project that investigates public trust in expertise) in Yerevan, Armenia: Zara Anwarzai, Luke Capek, and Annalise Norling, "Expert Testimonial Failures and Trust-Building," and Annalise Norling and Zara Anwarzai, "Taking the 'Expert' out of Expert Systems." The conference explored the normative and affective roles involved in decisions to trust, and both of the projects my colleagues and I presented focus on how such decisions to trust (or not to trust) might shape our risk assessments of various sorts of AI systems.


What is your track record on similar projects?

I have won essay competitions at both my undergraduate and graduate schools: the John Grant Essay in Bioethics Award (1st place of 3 awards) and the Clark Essay Prize for first- and second-year graduate students at IU Bloomington. I also lost an essay competition run by Eon on Toby Ord's book The Precipice, but I received helpful feedback following that loss.

I have also had a lifelong interest in forming arguments and discussing them with others. I have coached and organized ethics bowl teams (the intercollegiate variety, and hopefully soon the high school and prison versions as well) for over five years, and I love being able to engage with difficult topics and re-evaluate my thinking. A team I coached won first place at Nationals.


A Draft in Progress 

Here is what I have so far. Since I obviously have more work to do, I wanted to post it here first to get any feedback that I can: 


How General and How Comprehensive Must Artificial General Intelligence Be?

Annalise Norling 

May 2023


What is the probability that AGI is developed by January 1, 2043?


In this essay, I will attempt an answer to this question. 


First things first, here is the definition of AGI that I will be working with in this essay, provided by Open Philanthropy: “By “AGI” we mean something like ‘AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less.’”


New artificial intelligence capabilities, such as chatbots, are becoming increasingly embedded in our communications and in the decisions we make with one another.


We use virtual assistants to schedule meetings, message our friends, and set alarms. Insurance companies report using bots to automate claims and customer dialogue. Physicians use diagnostic algorithms to aid them in their triage and diagnosis of patients.


And in some of these instances, it seems that we are directly, or almost directly, engaging and interacting with these intelligent tools, which might further suggest that the underlying algorithm possesses general intelligence. But is this right? I think in some senses yes, and in other senses no. I'll explain further below.


A number of our jobs require the sort of general intelligence at which artificial intelligence systems are rapidly outpacing humans, such as generating and processing large amounts of information and maintaining a sharp, accurate memory of factual details.


But at the same time, a number of our jobs require the sort of general intelligence that involves attributing mental states to others. Understanding humans well is thus a crucial part of our general intelligence. So any AGI capable of substituting for a human worker on nearly all of the valuable tasks they can perform must possess a capacity, similarly sophisticated to ours, for attributing mental states to humans. And doing so might also require an intention, or some sort of willfulness, on the part of each 'intelligence' involved to figure out what these mental states really are or what they amount to.


For example, while in undergrad I worked as a certified nursing assistant (CNA). For diagnostic purposes, AIs were at times more accurate than physicians at identifying certain ailments. But does accuracy at a specific task count as the most weighty or crucial aspect of the general intelligence that matters in our communication and shared decision-making, particularly in a context like healthcare?


Well, think of the following question: would you prefer to consult a (more accurate) healthcare diagnostic AI over a human in a time of need? The answer clearly depends on what is meant by a 'time of need'. If my time of need crucially involves a quick diagnosis requiring the assessment of a very large amount of data, then perhaps I would prefer to consult a (more accurate) healthcare diagnostic AI over a human (expert) clinician. But after that point, I would tend to prefer the human clinician, and for many other ailments I would prefer the human clinician from the start. This is not simply because they are human. It is because they are more capable of understanding my mental states (as well as the technical medical information required, or at least enough of it, perhaps even with the aid of some sort of AI tool) and of providing me with medical advice that takes into account my best interests as a human person.


As I mentioned above, this is a crucial aspect of our general intelligence, and it is made especially clear in certain contexts, particularly those that require a sort of relational intelligence. What exactly does this sort of intelligence look like?


To start, the human agent clinician can be physically present in the triage or diagnostic conversation. This enables her to communicate with her patient both verbally and non-verbally, and to pick up on the verbal and non-verbal cues she receives from the patient. For example, a human agent physician might notice that a particular patient is acting more stoically than a typical patient would while in distress, and adjust her plans and recommendations accordingly. It is unclear how an AGI would accomplish this any time soon without being embedded in a human, in which case it is still unclear whether it is in fact an AGI (and of course a host of other ethical and risk worries arise which I cannot address in this essay).


The human agent clinician can exhibit genuine expressions of interest in a patient, and she can attend to her own perception of the patient: their history, their physical exam, the kind of support they receive from family and loved ones. She can then ask targeted questions based on that perception. This seems due in part to the human agent's ability to understand that the patient has (human) intentions among their mental states, and that these cannot be entirely decided or dictated by the clinician.


The human agent clinician can recognize when a patient is actively dying, even if the patient is non-verbal. This is because she has a more nuanced understanding of the needs of patients at the end of life: such patients are concerned not only with extending life but also with the quality of the life that remains. A clinician who neglected this would be failing in her general understanding of the situation as a whole.


This brings me to another crucial point: the human agent expert can recognize, and in turn value, a patient's perspective as distinct from her own. She can form a sentimental attachment and accumulate a great deal of anecdotal knowledge about a particular patient over the course of their relationship. This puts the human agent expert in a better position to care for the patient as a fellow human. She can then not only obtain information about the patient's illness but also understand how it might relate to the patient's life and goals. For example, an AGI clinician chatbot may tell you the best course of treatment for a certain patient, e.g. to opt for Surgery 1 or Surgery 2, but determining what is truly best for the patient overall is ideally not a task for what is essentially a predictive model, given the value choices necessarily involved in a diagnosis.


And in a different relational sense, the human agent clinician can relate in a certain way to the patient's family, her clinic staff and coworkers, and the organization in which she works. She can disclose conflicts of interest when appropriate, making her motives more reasonable or understandable to the patient. So speaking and making decisions on the basis of lived experience, rather than technical aptitude alone, may be important for AGI to be fully realized, at least in some contexts.


Communication, especially at this level and in a high-risk context, involves a fairly high level of intelligence and skill, especially since it is often how we say something, rather than what we actually say, that shapes how it is interpreted: the way we behave and listen, whether and how we deliver on our promises, how we treat others, and how others perceive our treatment are all aspects of physician-patient communication. In these respects, an AGI clinician is unable to communicate in the way a human agent can. How does this affect the ability of artificial intelligence to achieve AGI quickly, within the next twenty or so years?


Interpretation, then, involves attributing beliefs, desires, and intentions to the speaker or agent based on both their behavior and the linguistic expressions they produce. And comprehending the meaning of an utterance, a mental state, or a behavior often involves understanding how it fits within a network of beliefs, desires, and intentions held by the speaker, together with the intentions held by the interpreter. Hence, understanding language and communication requires a three-way relationship between the speaker, the interpreter, and the shared external world (i.e. the particular situation or context we find ourselves in, and how that specific context relates to the world as a whole).


Here is another concern for AGI being developed by 2043: I mentioned earlier that human clinicians may be better at handling rare or unusual cases (I am thinking of House, the television show, style cases here). The problem of theoretical underdetermination suggests that there can be multiple equally valid interpretations of a given set of data or evidence. This raises questions about the difficulty of developing an AI system capable of selecting the most appropriate interpretation among competing possibilities, especially when the traditional answer is sometimes correct and the nontraditional answer is correct at other times.
