LCDC Exam Braindumps 2024

Killexams LCDC Exam Braindumps include the latest syllabus of the Licensed Chemical Dependency Counselor exam with up-to-date exam content and actual questions.

LCDC PDF Dump Detail

LCDC Exam Braindumps and VCE


Our products include the LCDC PDF and VCE:

  • PDF Exam Questions and Answers: The LCDC Exam Braindumps contain the complete pool of LCDC questions and answers in PDF format. The PDF contains actual questions with the March 2024 updated Licensed Chemical Dependency Counselor Braindumps that will help you get high marks in the actual test. You can open the PDF file on any operating system (Windows, macOS, Linux) and on any device (computer, Android phone, iPad, iPhone, or other handheld devices). You can also print it to make your own book to read wherever you travel or stay. The PDF is suitable for high-quality printing and offline reading.
  • VCE Exam Simulator 3.0.9: The free LCDC Exam Simulator is a full-screen Windows app that replicates the exam screen you experience in the actual test center. This software provides a test environment where you can answer questions, take tests, review your incorrect answers, and monitor your performance. The VCE exam simulator uses actual exam questions and answers to administer your test and mark your performance accordingly. When you start getting 100% marks in the exam simulator, you are ready to take the real test in the test center. Our VCE Exam Simulator is updated regularly; the latest update is for March 2024.

Counselor LCDC Exam Braindumps

We offer Counselor LCDC Exam Braindumps containing actual LCDC exam questions and answers. These Exam Braindumps are very useful for passing the LCDC exam with high marks, and they are backed by a money-back guarantee from killexams.com.

Real Counselor LCDC Exam Questions and Answers

These LCDC questions and answers come in PDF files and are taken from the actual LCDC question pool that candidates face in the actual test. These real Counselor LCDC exam Q&As are an exact copy of the LCDC questions and answers you will face in the exam.

Counselor LCDC Practice Tests

The LCDC Practice Test uses the same questions and answers that are provided in the actual LCDC exam pool so that candidates can prepare for the real test environment. These LCDC practice tests are very helpful for practicing for the LCDC exam.

Counselor LCDC Exam Braindumps update

LCDC Exam Braindumps are updated on a regular basis to reflect the latest changes in the LCDC exam. Whenever any change is made to the actual LCDC test, we provide the changes in our LCDC Exam Braindumps.

Complete Counselor LCDC Exam Collection

Here you can find the complete Counselor exam collection, where Exam Braindumps are updated on a regular basis to reflect the latest changes in the LCDC exam. All sets of LCDC Exam Braindumps are fully verified and up to date.

Licensed Chemical Dependency Counselor Exam Braindumps

Killexams.com LCDC Exam Braindumps contain the complete question pool, updated in March 2024, along with a VCE exam simulator that will help you get high marks in the exam. All of these LCDC exam questions are verified by Killexams certified professionals and backed by a 100% money-back guarantee.



LCDC test - Licensed Chemical Dependency Counselor Updated: 2024

Get a good score on the LCDC exam with these dumps. Guaranteed.
Exam Code: LCDC | Licensed Chemical Dependency Counselor test | January 2024 | by the Killexams.com team

LCDC Licensed Chemical Dependency Counselor

A licensed chemical dependency counselor is licensed to provide chemical dependency counseling services involving the application of principles, methods and procedures of the chemical dependency profession as defined by the profession's ethical standards and the knowledge, skills and abilities as defined by rule in 25 TAC Ch. 441 (relating to general provisions). The license does not qualify a person to provide services outside this scope of practice.



The scope of practice for a chemical dependency counselor includes services that address substance abuse/dependence and/or its impact on the service recipient subject to the following:



The counselor is prohibited from using techniques that exceed his or her professional competence.
The service recipient can only be the user, family member or any other person involved in a significant relationship with an active user.

LCDCs can diagnose substance disorders, but anything other than a mental health diagnostic impression must be determined by a qualified professional.

LCDCs are not qualified to treat people with a mental health disorder or provide family counseling to people whose presenting problems do not include chemical dependency.

The practice of chemical dependency counseling services is defined by rule as "providing or offering to provide chemical dependency counseling services involving the application of the principles, methods and procedures of the chemical dependency counseling profession as defined by the activities listed in the domains of TAP 21, 'Addictions Counseling Competencies: The Knowledge, Skills, and Attitudes of Professional Practice,' published by the Center for Substance Abuse Treatment."



The licensing authority's functions include:

  • Evaluate credentials
  • Issue initial and renewal licenses/registrations to qualified applicants
  • Investigate complaints
  • Deny, revoke, or suspend licenses after an opportunity for a hearing has been offered



Final Disciplinary Actions - License Denial, Surrender, Revocation
Further information, including a copy of the final order, is available upon written request from our office.



Final Disciplinary Actions - All Other, including Probated Suspension, Reprimand, and Administrative Penalty
View enforcement actions (PDF), which are posted on this website for a total of seven (7) years from the date all the terms of the Order have been met, in accordance with our records retention schedule. The regulated individual may now be currently licensed, in good standing, and practicing without any restrictions.

Other Counselor exams

CPCE Counselor Preparation Comprehensive Examination
LCAC Licensed Chemical Addictions Counselor
LCDC Licensed Chemical Dependency Counselor
NBCC-NCC NBCC - National Certified Counselor

If you are interested in efficiently passing the LCDC exam, killexams.com has LCDC dumps with actual questions that will make sure you pass this LCDC exam. killexams.com offers you valid, up-to-date LCDC exam questions backed by a 100% money-back guarantee.
Counselor
LCDC
Licensed Chemical Dependency Counselor
https://killexams.com/pass4sure/exam-detail/LCDC
Question: 78
Which of the following principles does not apply to case management models?
A. It prepares clients for the next stage of treatment.
B. It consolidates client care to a point of responsibility.
C. It intervenes quickly in a client's life if incidents so indicate.
D. It is always community based.
Answer: C
The principle that does not apply to case management models is quick intervention in a client's life.
Question: 79
The following are phases of the treatment continuum. Which of the listed items is not one of those phases?
A. Case finding Pretreatment.
B. Primary Treatment.
C. Aftercare.
D. Clinical/rehabilitation.
Answer: D
Clinical/rehabilitation is not a treatment continuum phase; it is a case management model.
Question: 80
What does the term pretreatment refer to in case management?
A. Client monitoring.
B. Clinical case management.
C. Pretreatment.
D. Aftercare.
Answer: C
Pretreatment is the process of removing any barriers that might prevent a client from accessing case management resources.
Question: 81
Which case management term refers to the phases of substance abuse/dependence treatment, aftercare, and disengagement?
A. Strength-based.
B. Advocacy.
C. Treatment continuum.
D. Monitoring.
Answer: C
The treatment continuum refers to the phases involved in substance abuse/dependence treatment, aftercare, and disengagement.
Question: 82
Which of the following functions involves interceding on behalf of the client to obtain needed resources?
A. Treatment continuum.
B. Advocacy.
C. Planning.
D. Linking.
Answer: B
Advocacy is the case management function of assisting clients in accessing services, since the inability to access these services can be a barrier to continued recovery.
Question: 83
Which case management model is said by some to increase the problems involved in delivering services?
A. Clinical/rehabilitation model
B. Assertive community treatment model.
C. Strength-based model.
Answer: C
The Strength-based model encourages the use of informal helping networks with
the clients.
Question: 84
A 6-month study will compare clinical case management to other types of case management. Which of the following factors is least helpful to include in the analysis?
A. Alcohol use
B. Medical status
C. Family relations
D. Financial status
Answer: D
Financial status is less important than alcohol use, medical status, and family
relations for this analysis. Employment status would be a better factor to include.
Question: 85
Disengagement takes place over time and involves the counselor and client discussing certain aspects of life following discharge. Which of the following do they discuss?
A. The progress in the life goals the client set while in treatment.
B. The people contacted to apologize following discharge.
C. The resources the client has accessed.
Answer: C
These are the resources that were established before discharge and that the client plans to utilize to support recovery.
For more exams, visit https://killexams.com/vendors-exam-list
Kill your exam at First Attempt....Guaranteed!

An Exploratory Look At Whether Generative AI Can Pass An Official Mental Health Counseling Licensing Exam That Professionals Take

In today’s column, I will be closely looking at whether generative AI could potentially pass an official mental health counseling licensing exam. This is part of my ongoing in-depth series about generative AI or large language models (LLMs) that are or can be anticipated to be used for mental health guidance or advisement.

Before I dive into today’s particular topic, I’d like to provide a quick background for you so that you’ll have a suitable context about the arising use of generative AI for mental health advisement purposes. I’ve mentioned this in prior columns and believe the contextual establishment is essential overall. If you are already familiar with the overarching background on this topic, you are welcome to skip down below to the next section of this discussion.

The use of generative AI for mental health treatment is a burgeoning area of tremendously significant societal ramifications. We are witnessing the adoption of generative AI for providing mental health advice on a widescale basis, yet little is known about whether this is beneficial to humankind or perhaps contrastingly destructively adverse for humanity.

Some would affirmatively assert that we are democratizing mental health treatment via the impending rush of low-cost always-available AI-based mental health apps. Others sharply decry that we are subjecting ourselves to a global wanton experiment in which we are the guinea pigs. Will these generative AI mental health apps steer people in ways that harm their mental health? Will people delude themselves into believing they are getting sound mental health advice, ergo foregoing treatment by human mental therapists, and become egregiously dependent on AI that at times has no demonstrable mental health improvement outcomes?

Hard questions are aplenty and not being given their due airing.

Furthermore, be forewarned that it is shockingly all too easy nowadays to craft a generative AI mental health app, and just about anyone anywhere can do so, including while sitting at home in their pajamas and not knowing any bona fide substance about what constitutes suitable mental health therapy. Via the use of what are referred to as establishing prompts, it is easy-peasy to make a generative AI app that purportedly gives mental health advice. No coding is required, and no software development skills are needed.

We sadly are faced with a free-for-all that bodes for bad tidings, mark my words.

I’ve been hammering away at this topic and hope to raise awareness about where we are and where things are going when it comes to the advent of generative AI mental health advisement uses. If you’d like to get up-to-speed on my prior coverage of generative AI across a wide swath of the mental health sphere, you might consider for example these cogent analyses:

  • (1) Use of generative AI to perform mental health advisement, see the link here.
  • (2) Role-playing with generative AI and the mental health ramifications, see the link here.
  • (3) Generative AI is both a cure and a curse when it comes to the loneliness epidemic, see the link here.
  • (4) Mental health therapies struggle with the Dodo verdict for which generative AI might help, see the link here.
  • (5) Mental health apps are predicted to embrace multi-modal, e-wearables, and a slew of new AI advances, see the link here.
  • (6) AI for mental health got its start via ELIZA and PARRY, here’s how it compares to generative AI, see the link here.
  • (7) The latest online trend entails using generative AI as a rage-room catalyst, see the link here.
  • (8) Watching out for when generative AI is a mental manipulator of humans, see the link here.
  • (9) FTC aiming to crack down on outlandish claims regarding what AI can and cannot do, see the link here.
  • (10) Important AI lessons learned from the mental health eating-disorders chatbot Tessa that went awry and had to be shut down, see the link here.
  • (11) Generative AI that is devised to express humility might be a misguided approach including when used for mental health advisement, see the link here.
  • (12) Creatively judging those AI-powered mental health chatbots via the use of AI levels of autonomy, see the link here.
  • (13) Considering whether generative AI should be bold and brazen or meek and mild when proffering AI mental health advisement to humans, see the link here.
  • (14) Theory of Mind (ToM) is an important tool for mental health therapists and the question arises whether generative AI can do the same, see the link here.
  • And so on.

Here’s how I will approach today’s discussion.

First, I will introduce you to a pioneering research study that sought to assess whether generative AI could potentially pass an exam taken by medical school students as part of their pursuit of achieving their medical degree. The exam is known as the United States Medical Licensing Examination (USMLE). This study received a great deal of headlines since it showcased that generative AI seems to do well on the arduous medical exams taken by budding doctors. Next, I will share with you some salient details about an exam for mental health professionals known as the National Clinical Mental Health Counseling Examination (NCMHCE).

I’m guessing you might be wondering whether generative AI might be able to do well on that type of exam. Great question, thanks. I opted to use a popular generative AI app called ChatGPT to try out a half-dozen questions from the NCMHCE. Please note that this was merely an official sample set and not by any means the full exam.

Would you be surprised to know that the generative AI was able to successfully answer many of the sampled questions? I provide some important caveats and limitations about this mini experiment of sorts, and I want to emphasize this was principally done on an ad hoc basis and merely intended to be illustrative.

Here’s the deal.

Please do not jump the shark on this matter. Hold your horses. My mainstay aims here are simply to inspire others to do a deep dive on this and perform a fully comprehensive rigorous research study of an akin nature, perhaps modeled somewhat on the same approach taken by the study on the USMLE or similar such professional licensing domains.

Anyway, I believe you will find this interesting and engaging, and it will possibly whet your appetite to find out more on these topics. My discussion offers yet another angle for considering where we are and where things are going pertaining to generative AI and the field of mental health therapy.

Please buckle up and prepare yourself for quite a ride.

Generative AI And Medical School Standardized Licensing Exam

Let’s talk about tests.

We generally assume that to practice medicine a test of some kind should be required to attest to the proficiency of the person that will be serving as a medical professional. I’d like to start by discussing perhaps one of the most famous such medical proficiency tests known as the United States Medical Licensing Examination (USMLE). This is the test typically expected of those attaining a medical degree in the United States.

The USMLE was devised to aid in standardizing upon one major medical examination test that would be acceptable across every state and ensure that MDs were meeting the same set of standards. The test is composed of three separate stages and is taken during medical school and also upon graduation from medical school.

Here’s some additional detail as noted on the USMLE website:

  • “In the United States and its territories, the individual medical licensing authorities (‘state medical boards’) of the various jurisdictions grant a license to practice medicine. Each medical licensing authority sets its own rules and regulations and requires passing an examination that demonstrates qualification for licensure. Results of the USMLE are reported to these authorities for use in granting the initial license to practice medicine. The USMLE provides them with a common evaluation system for applicants for initial medical licensure.”
  • “USMLE was created in response to the need for one path to medical licensure for allopathic physicians in the United States. Before USMLE, multiple examinations (the NBME Parts examination and the Federation Licensing Examination [FLEX]) offered paths to medical licensure. It was desirable to create one examination system accepted in every state, to ensure that all licensed MDs had passed the same assessment standards – no matter in which school or which country they had trained.”
  • “The United States Medical Licensing Examination® (USMLE®) is a three-step examination for medical licensure in the U.S. The USMLE assesses a physician's ability to apply knowledge, concepts, and principles, and to demonstrate fundamental patient-centered skills, that are important in health and disease and that constitute the basis of safe and effective patient care.”

Humans take the USMLE to showcase their proficiency in medicine. When you encounter a medical doctor, you are likely to assume they probably took the test and passed it. On an intuitive basis we realize that having to pass such an arduous test is impressive and helps to provide us comfort that the person knows their stuff when it comes to the medical field.

Shift gears.

Can generative AI potentially also be proficient enough to pass the USMLE?

That’s an interesting and some would say important question worthy of considering.

First, some quick background about generative AI.

Realize that generative AI is not sentient and only consists of mathematical and computational pattern matching. The way that generative AI works is that a great deal of data is initially fed into a pattern-matching algorithm that tries to identify patterns in the words that humans use. Most of the modern-day generative AI apps were data trained by scanning data such as text essays and narratives that were found on the Internet. Doing this was a means of getting the pattern-matching to statistically figure out which words we use and when we tend to use those words. Generative AI is built upon the use of a large language model (LLM), which entails a large-scale data structure to hold the pattern-matching facets and the use of a vast amount of data to undertake the setup data training.
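To make the pattern-matching notion a bit more concrete, here is a deliberately tiny sketch in Python. It is a toy word-frequency (bigram) counter, not remotely how production LLMs are built, but it illustrates the idea of statistically figuring out which words tend to follow which words and then generating text from those statistics.

```python
from collections import Counter, defaultdict
import random

# Toy illustration of word-level pattern matching: count which word tends to
# follow which in the training text, then generate by sampling those counts.
# Production LLMs use huge neural networks, but the statistical spirit is akin.
corpus = "the therapist listens and the client talks and the therapist reflects".split()

follow_counts = defaultdict(Counter)
for prev_word, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev_word][nxt] += 1

def next_word(word):
    # Sample a continuation in proportion to how often it was observed.
    candidates = follow_counts.get(word)
    if not candidates:
        return None
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g., "the therapist listens and the client talks"
```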

There are numerous generative AI apps available nowadays, including GPT-4, Bard, Gemini, Claude, ChatGPT, etc. The one that is seemingly the most popular would be ChatGPT by AI maker OpenAI. In November 2022, OpenAI’s ChatGPT was made available to the public at large and the response was astounding in terms of how people rushed to make use of the newly released AI app. There are an estimated one hundred million active weekly users at this time.

Using generative AI is relatively simple.

You log into a generative AI app and enter questions or comments as prompts. The generative AI app takes your prompting and uses the already devised pattern matching based on the original data training to try and respond to your prompts. You can interact or carry on a dialogue that appears to be nearly fluent. The nature of the prompts that you use can be a make-or-break when it comes to getting something worthwhile out of using generative AI and I’ve discussed at length the use of state-of-the-art prompt engineering techniques to best leverage generative AI, see the link here.
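For those who prefer to see the interaction pattern as code rather than a chat window, here is a minimal sketch of programmatic prompting. It assumes the OpenAI Python SDK (version 1.x) with an API key set in the environment; the model name and the establishing prompt shown are illustrative placeholders, not prescriptions.

```python
# Minimal sketch of prompting a generative AI app programmatically, assuming
# the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any available chat model could be used
    messages=[
        # The "system" message plays the role of an establishing prompt.
        {"role": "system", "content": "Answer each multiple-choice question "
                                      "with one letter, then explain why."},
        {"role": "user", "content": "Q1. ... A. ... B. ... C. ... D. ..."},
    ],
)
print(response.choices[0].message.content)
```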

Shortly after ChatGPT was made publicly available, many AI researchers began to test the AI app by administering various well-known standardized tests to see how the AI app would do. In February 2023, a research study was posted that indicated ChatGPT had performed surprisingly well on the USMLE. The study was entitled “Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models” by Tiffany H. Kung, Morgan Cheatham, ChatGPT, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, Victor Tseng, PLOS Digital Health, and posted on February 9, 2023.

Here is what the research paper stated overall (excerpts):

  • “We evaluated the performance of a large language model called ChatGPT on the United States Medical Licensing Exam (USMLE), which consists of three exams: Step 1, Step 2CK, and Step 3. ChatGPT performed at or near the passing threshold for all three exams without any specialized training or reinforcement. Additionally, ChatGPT demonstrated a high level of concordance and insight in its explanations.”
  • “USMLE questions are textually and conceptually dense; text vignettes contain multimodal clinical data (i.e., history, physical examination, laboratory values, and study results) often used to generate ambiguous scenarios with closely-related differential diagnoses.”

Consider mindfully those above-noted remarks from the AI research effort.

ChatGPT was able to score either at or near the passing threshold for the three-stage USMLE. Thus, an arduous medical proficiency exam that we expect human medical doctors to pass was nearly passed by a generative AI app. Some would decry this result as misleading in the sense that the generative AI was doing this without real “knowledge” akin to what humans seem to possess. The concern is that generative AI is nothing more than a so-called stochastic parrot that mimics human wording and fails to “understand” or “comprehend” what is going on.

Nonetheless, the aspect that generative AI could accomplish such a feat is unto itself impressive, even if done via smoke and mirrors as some suggest. The result is additionally surprising because the researchers used ChatGPT out of the box, as it were, namely the generic version of ChatGPT. Another approach would be to add additional data training on the medical field to ChatGPT, but that’s not what they did in this experiment. A generic data-trained generative AI was able to do well on a highly specialized medical domain exam. For more about how generic generative AI can be fine-tuned to specific domains, see my coverage at the link here.

Let’s consider a few other detailed aspects about the notable research result and then I’ll move to my next topic of discussion.

The research paper noted these salient details (excerpted):

  • “The data analyzed in this study were obtained from USMLE sample question sets which are publicly available.”
  • “376 publicly-available test questions were obtained from the June 2022 sample exam release on the official USMLE website. Random spot checking was performed to ensure that none of the answers, explanations, or related content were indexed on Google prior to January 1, 2022, representing the last date accessible to the ChatGPT training dataset. All sample test questions were screened, and questions containing visual assets such as clinical images, medical photography, and graphs were removed. After filtering, 305 USMLE items (Step 1: 93, Step 2CK: 99, Step 3: 113) were advanced to encoding.”
  • “In this present study, ChatGPT performed at >50% accuracy across all examinations, exceeding 60% in most analyses. The USMLE pass threshold, while varying by year, is approximately 60%.”
  • “Therefore, ChatGPT is now comfortably within the passing range. Being the first experiment to reach this benchmark, we believe this is a surprising and impressive result. Moreover, we provided no prompting or training to the AI, minimized grounding bias by expunging the AI session before inputting each question variant, and avoided chain-of-thought biasing by requesting forced justification only as the final input.”

I’d like to bring your attention to a few points made in those excerpts.

Notice that the experiment consisted of identifying a sample of publicly available questions associated with the exam. The idea is to usually feed samples of questions and not necessarily an entire test per se. It is important to consider how a sample was chosen and whether the sample is suitably representative of what the full test might contain. Fair is fair.

Another fairness consideration is that there is always a chance that the generative AI might have been initially data-trained on the very same questions. If those questions were found when the startup data training took place, you could say it is absurd to feed the same questions into the generative AI. The answers will likely already be known simply due to having seen the questions and their answers beforehand.

If you select questions that arose after the cutoff date of the generative AI app’s data training, you are somewhat comfortable that the content wasn’t encountered already. But even that is readily questioned since the questions might have appeared in other guises. Some exams modify old questions and reuse them in later versions of the exam. There is a chance that a new question is close enough to an older question that perhaps this gives the generative AI a leg up on answering the new question.

My point is that you need to carefully consider how these experiments are conducted. Overall, make sure to look at what sample was chosen and how appropriate it is. What are the odds that the generative AI has previously encountered the same or similar questions? As much as feasible, the goal is to set a fair and square playing field to see whether the generative AI can genuinely answer questions that have not previously been used as part of the data training effort.
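To picture that screening discipline, here is a small sketch of the kind of filtering pass the USMLE study describes, dropping image-dependent items and items that might already sit inside the model's training data. The field names and the cutoff date are assumptions made solely for this illustration.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative screening pass in the spirit of the study's methodology. The
# field names and the cutoff date below are assumptions for this sketch.
@dataclass
class ExamItem:
    text: str
    has_visual_asset: bool  # clinical images, photography, graphs
    release_date: date

TRAINING_CUTOFF = date(2022, 1, 1)  # assumed last date in the training data

def screen(items: list[ExamItem]) -> list[ExamItem]:
    kept = []
    for item in items:
        if item.has_visual_asset:
            continue  # a text-only model cannot see the image
        if item.release_date < TRAINING_CUTOFF:
            continue  # the item may already be in the training data
        kept.append(item)
    return kept
```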

You now have a semblance of what takes place when trying to assess generative AI about being able to pass exams such as the pervasive USMLE in the medical domain.

Let’s continue our exploration.

Generative AI And Mental Health Therapy Exam Taking

The research study that explored the use of generative AI such as ChatGPT on the USMLE can serve as a role model for similar kinds of studies. The conception is to identify publicly available sample questions, administer the questions to the generative AI, and see how well or poorly the generative AI scores on answering the questions. As much as possible, try to keep the playing field level and fair.
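That recipe can be captured in a few lines of code. The sketch below assumes a hypothetical ask_model helper that wraps whatever chat interface is being tested; a rigorous study would need sturdier answer parsing plus human review of ambiguous replies.

```python
# Minimal grading loop for a study of this shape: administer each sampled
# question, parse the chosen letter, and tally agreement with the answer key.
# `ask_model` is a hypothetical helper wrapping the chat interface in use.
def extract_choice(reply: str) -> str | None:
    # Naive parse: first standalone A-D token in the reply. Rigorous studies
    # would need sturdier parsing and manual review of ambiguous replies.
    for token in reply.replace(".", " ").replace(":", " ").split():
        if token in {"A", "B", "C", "D"}:
            return token
    return None

def grade(questions: dict, answer_key: dict, ask_model) -> tuple[int, int]:
    correct = 0
    for qid, question in questions.items():
        chosen = extract_choice(ask_model(question))
        if chosen == answer_key[qid]:
            correct += 1
    return correct, len(questions)
```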

I decided to try this quickly for the field of mental health therapy or mental health counseling.

There is a well-known exam known as the National Clinical Mental Health Counseling Examination (NCMHCE). Sample questions are publicly posted online. I selected some of the sample questions and fed them into ChatGPT. I opted to use ChatGPT due to its immense popularity, and it has generally been the default choice of similar research studies.

I might note that a more advanced generative AI such as GPT-4 by OpenAI or others would likely do a better job than ChatGPT. In that manner, you could interpret the ChatGPT usage as the floor and that we might expect heightened results by using a more advanced generative AI app. There isn’t an ironclad guarantee that a more advanced generative AI will do better. The odds though are in that direction.

We also have to be watchful for in a sense polluting an experiment by perchance using questions that have already been seen by the generative AI during the initial data-training. Furthermore, if the generative AI is hooked up to the Internet, the AI might simply go out and find the questions and their answers, similar to a search engine, rather than trying to directly answer the questions. ChatGPT in that sense is a handy choice because the free version does not readily allow for Internet access to perform its activities and the data training was last cut off in January 2022 (at the time of writing of this discussion).

Let’s dive into the ad hoc experiment by first establishing the nature of the mental health therapy or mental health counseling exam.

The National Clinical Mental Health Counseling Examination (NCMHCE) is devised and administered via an organization known as the National Board for Certified Counselors, Inc. Here is what the website for the organization says (excerpts):

  • “The National Board for Certified Counselors, Inc. and Affiliates (NBCC) is the premier credentialing body for counselors, ensuring that counselors who become nationally certified have achieved the highest standard of practice through education, examination, supervision, experience, and ethical guidelines.”
  • “Established as a not-for-profit, independent certification organization in 1982, NBCC’s original and primary purposes have broadened, and its divisions and affiliates have taken on additional responsibilities to advance the counseling profession and enhance mental health worldwide.”
  • “Today, there are over 69,000 National Certified Counselors (NCCs) in more than 40 countries.”

The gist is that this is a well-known and widely accepted organization, and the exam is likewise well-known and widely accepted. I bring this up in case you read a study that used generative AI on some relatively unknown exam or an exam with a less-than-stellar reputation, in which case you would want to gauge the result of the study partially on the rigor and standing of the test being given at the get-go.

Here is what the website about the NCMHCE says about the exam (excerpts):

  • “The National Clinical Mental Health Counseling Examination (NCMHCE) is designed to assess the knowledge, skills, and abilities determined to be important for providing effective counseling services. The NCMHCE is a requirement for counselor licensure in many states. It is one of two examination options for the National Certified Counselor (NCC) certification and also fulfills the examination requirement for the Certified Clinical Mental Health Counselor (CCMHC) specialty certification.”
  • “The NCMHCE measures an individual’s ability to apply and evaluate knowledge in core counselor skills and competencies and to practice competently as a professional counselor. Specifically, it assesses an entry-level clinical mental health counselor’s ability to apply knowledge of theoretical and skill-based tenets to clinical case studies. The case studies are designed to capture a candidate’s ability to identify, analyze, diagnose, and develop plans for treatment of clinical concerns.”
  • “Candidates for the NCMHCE must have a graduate-level degree or higher from a counseling program accredited by the Council for Accreditation of Counseling and Related Educational Programs (CACREP) or administered by an institutionally accredited college or university. The counseling degree program must contain courses in eight requirement areas.”

Observe some key points mentioned in those excerpts.

First, the exam is used to assess entry-level clinical mental health counselors. You might say that this is handy for my ad hoc experiment since I want to focus on the keystone threshold needed to be considered suitably knowledgeable for proceeding to perform mental health therapy with real clients or patients. Other exams might be used to assess more advanced skill levels, but I’m aiming here to start with the usual starting point. I’m sure that other researchers are or will try to do the same for more advanced instances.

Second, note that candidates who want to sit for the exam must have a graduate-level degree or higher from an accredited counseling program or as administered by an accredited college or university. This sets the bar higher than perhaps allowing an undergraduate to take the exam or maybe wantonly opening the exam to anyone who wants to take it. We can presume that the test is likely to ask questions of a hard nature. That’s good since we would want to make sure we provide something challenging to generative AI rather than some easy-peasy questions or materials. We might also note that of course, generative AI would not qualify to officially take the exam since it has not met all the criteria to do so.

The official exam website provides an NCMHCE Sample Case Study that indicates the case study is considered updated as of March 2023. I selected six sample questions from this sample set. I want to loudly emphasize that this is an ad hoc selection and I do so merely to be illustrative of what might be done on a more rigorous basis.

Though the date says March 2023, there of course is a chance that these questions and their answers have been around before that date, and ChatGPT might have seen them before the January 2022 cutoff date. I tried to do various probing into ChatGPT to see if the content had already been encountered. By and large, it doesn’t seem to have been, but that’s not known for sure, and a deeper analysis would need to be undertaken to ascertain this. For the moment, let’s go with the flow and assume that the sample questions weren’t previously seen by ChatGPT during its data training.
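For what it's worth, one informal probe of that kind can be automated: show the model the first half of a question, ask it to continue verbatim, and measure how close the continuation is to the true second half. High overlap hints at memorization, while a low score is only weak evidence of absence. The ask_model helper here is again a hypothetical stand-in.

```python
from difflib import SequenceMatcher

# Informal contamination probe: ask the model to continue a question verbatim
# and measure overlap with the true continuation. `ask_model` is hypothetical.
def memorization_score(question_text: str, ask_model) -> float:
    half = len(question_text) // 2
    prompt = ("Continue this text exactly as it was originally written:\n"
              + question_text[:half])
    continuation = ask_model(prompt)
    return SequenceMatcher(None, continuation, question_text[half:]).ratio()

# Scores near 1.0 suggest the text was memorized during training; low scores
# do not prove the item was unseen, so deeper checks remain necessary.
```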

The six sampled questions cover these six respective topics:

  • Q1. Establish a therapeutic alliance.
  • Q2. Identify strengths that improve the likelihood of goal attainment.
  • Q3. Discuss limits of confidentiality.
  • Q4. Determine a diagnosis.
  • Q5. Assess the presenting problem and level of distress.
  • Q6. Establish short- and long-term counseling goals consistent with the client’s diagnosis.

Keep that in mind as I walk you through what ChatGPT provided as answers to the posed questions.

The test is essentially based on case studies. For these six sampled questions, a case study was provided in the publicly posted material. The case study was fed into ChatGPT for this analysis. Rather than displaying for you the entirety of the case study, I will do a quick recap to bring you up to speed.

In this instance, the case study entails a divorced female of age 35 who is first undertaking a mental health counseling session with a mental health therapist who has some background about the client or patient but otherwise, this is the first meeting of the two. The client or patient has already been provisionally diagnosed as having a major depressive disorder.

Additional background is given about the client or patient. For example, after her divorce, she began staying in bed quite a lot and moved back in with her mother. She got fired from her job. She has had financial difficulties. Most days, she feels sad, empty, and anxious. She has joint legal custody with her ex-husband of their two children, respectively aged 10 and 12. And so on.

That outlines the nature of the underlying case study.

Questions And The Answers Generated By The Generative AI

I am going to walk you through each of the six multiple-choice questions and also showcase the answers that were generated by ChatGPT so that you can follow along step-by-step.

My initiating prompt asks ChatGPT to provide answers plus explain why each chosen answer was considered the correct answer by ChatGPT. Asking for an explanation is not necessary, but I thought getting explanations might be interesting to see.

There is also a bit of prompting strategy involved, namely that by asking for an explanation the chances are that a generative AI app might be more extensive in trying to solve a given question or problem, see my discussion at the link here. You could suggest that I was aiding the generative AI by giving an establishing prompt that would urge it to do a better job than otherwise. Whenever you look at research studies doing just about anything with generative AI, make sure to find out what prompts they used. This is a significant factor related to the performance of the generative AI. Studies that fail to showcase their prompts are unfortunately doing a bit of a disservice by not revealing how they got the generative AI to undertake things.

The sampled questions are based on the case study, and I’ve placed them in quotes to indicate they came from the case study. In some instances, the wording has been slightly adjusted, merely and strictly for purposes of feeding them into ChatGPT.

I am purposely not going to speculate or comment on the answers that are given by ChatGPT. I will simply note whether ChatGPT selected the correct multiple-choice answer as stipulated in the sample set. I’m guessing that mental health therapists and mental health counselors will undoubtedly find the explanations of special interest and will indubitably get them extensively mulling over what ChatGPT had to say.

Here is my establishing prompt and the answer provided by ChatGPT:

  • My prompt: “I am going to ask you a series of questions about the case study. The questions are multiple-choice. You are to select an answer of either A, B, C, or D from the choices provided for each question. I want you to carefully read each question and then indicate which answer is the best answer of the choices provided. You are to also explain why the answer you’ve chosen is the best answer. Are you ready to answer the first question?”
  • ChatGPT generated a response: “Yes, I'm ready. Please proceed with the first question.”

Okay, we are good to go.

Here’s the first question:

“Q1. What would be the best way for a mental health therapist, at the initial intake, to seek to establish a therapeutic relationship with this person?”

“A. Reflect on what is irrational about the woman’s self-evaluations.”

“B. Advise her about how to manage her painful experiences.”

“C. Summarize her emotions and her desire for change.”

“D. Explore how she could Excellerate her role in relationships.”

  • ChatGPT generated response:

“Answer to Q1 is C. Summarize her emotions and her desire for change.”

“Explanation: Establishing a therapeutic relationship involves demonstrating empathy, understanding, and acknowledging the client's emotions. Summarizing her emotions and desire for change (Option C) helps validate her experiences and shows that the therapist is actively listening and comprehending her situation. This approach helps build rapport and sets the foundation for a collaborative therapeutic relationship. It also allows the client to feel heard and understood, creating a safe space for further exploration and intervention.”

End of ChatGPT response

ChatGPT selected answer C, which is considered the correct answer as per the sample set.

The explanation indicates the presumed basis. I could have also asked ChatGPT to explain why the other answers weren’t as good as the selected answer. That would have gotten quite wordy and didn’t seem needed for this ad hoc analysis. A deeper study might want to consider getting a lengthier reply with a more elaborate explanation.

We move to the second question:

“Q2. What information should a mental health therapist explore to help the client remember her strengths?”

“A. The client’s disclosure about her previous level of functioning.”

“B. The client’s recollections of past mistakes.”

“C. The client’s feelings about her mother paying for counseling.”

“D. The client’s use of cognitive distortions.”

  • ChatGPT generated response:

“Answer to Q2 is A. The client’s disclosure about her previous level of functioning.”

“Exploring the client's disclosure about her previous level of functioning (Option A) would be most relevant to identify and highlight her strengths. Understanding her past achievements, abilities, and positive aspects of her life before the onset of depressive symptoms can help the therapist and the client focus on resources that can be tapped into for recovery. This approach emphasizes a strengths-based perspective and encourages the client to recognize her capabilities, fostering a more positive outlook on her current situation.”

End of ChatGPT response

The answer indicated by ChatGPT is answer A, which is also the correct answer per the sample set. We are now two-for-two in terms of having gotten the questions right.

Next, let’s see what happens with the third question:

“Q3. How should a mental health therapist handle the client’s concern about confidentiality?”

“A. Explain that as a mental therapist, they will share progress updates with the mother but not provide any details.”

“B. Clarify what can and cannot be shared through informed consent.”

“C. Request the mother’s assent to the treatment and obtain the client’s consent.”

“D. Ask the client for a signed release that would include her mother as part of the treatment process.”

  • ChatGPT generated response:

“Answer to Q3 is B. Clarify what can and cannot be shared through informed consent.”

“The best approach is to clarify confidentiality issues through informed consent (Option B). The therapist should explain the limits of confidentiality, outlining what information can be shared and under what circumstances. This empowers the client to make an informed decision about what aspects of her treatment will remain confidential and what may need to be shared. It respects the client's autonomy and privacy while addressing concerns about the involvement of her mother in the therapy process.”

End of ChatGPT response

The answer indicated by ChatGPT is B, which is also the correct answer per the sample set. Three for three, so far.

Here’s the fourth question:

“Q4. From the perspective of the mental health therapist, what self-report from the client is a discernible criterion for the given diagnosis?”

“A. Her disagreements with her mother.”

“B. Her increased psychomotor agitation.”

“C. Her difficulty managing her children.”

“D. Her decreased pleasure in her activities.”

  • ChatGPT generated response:

“Answer to Q4 is D. Her decreased pleasure in her activities.”

“A discernible criterion for a major depressive disorder is often a pervasive and persistent decrease in pleasure or interest in previously enjoyed activities. This is commonly referred to as anhedonia. The client's report of not being able to remember the last time she enjoyed something and the cessation of an activity she used to find enjoyable (creating jewelry) aligns with this criterion. While other aspects, such as disagreements with her mother (Option A) or difficulty managing her children (Option C), may be related to her emotional state, the decreased pleasure in activities is a more specific indicator for depression.”

End of ChatGPT response

The answer indicated by ChatGPT is D, which is the correct answer according to the sample set. You will also note that this time the explanation opted to explain why the chosen answer was considered a better choice than some of the other proposed answers.

We are at four of four in terms of getting the correct answers.

Let’s see what occurs with the fifth question:

“Q5. What criteria were likely used to determine the severity specified for the client’s major depressive disorder?”

“A. The number of symptoms is not substantially in excess of those needed to make the diagnosis, yet functional impairment exists.”

“B. The intensity of the symptoms is distressing but manageable, and results in minor impairment of social or occupational functioning.”

“C. The intensity of symptoms is high and is accompanied by psychomotor agitation and mixed features associated with a major depressive episode.”

“D. The number of symptoms and their intensity are distressing and unmanageable and interfere with the client’s social and occupational functioning.”

  • ChatGPT generated response:

“Answer to Q5 is D. The number of symptoms and their intensity are distressing and unmanageable and interfere with the client’s social and occupational functioning.”

“The severity specified for major depressive disorder is often determined by the number and intensity of symptoms, as well as their impact on the client's social and occupational functioning. In this case, the client reports feeling sad, empty, and anxious most days, experiences tension every day, has difficulty getting out of bed, and has ceased engaging in previously enjoyed activities. Additionally, she has lost her job, faces financial difficulties, and expresses guilt about her interactions with her children. These factors suggest a significant impact on both social and occupational functioning, aligning with the criteria for a more severe form of major depressive disorder.”

End of ChatGPT response

ChatGPT indicates the answer should be D, but the sample set indicates the correct answer is A. This is the first miss by ChatGPT in this set of sampled questions.

I was curious whether ChatGPT could justify the answer D versus the answer A.

One thing to keep in mind is that some generative AI apps such as ChatGPT tend to immediately back down when you challenge them about an answer. The AI will indicate that it was wrong and switch to the answer that you say is the correct answer. In a sense, this is somewhat disconcerting because you would generally prefer the AI to moderately push back and support an answer that was chosen.

I am not saying this should be doggedly dogmatic. It certainly makes sense to have the AI back down if the user is telling the AI that the answer is wrong, though I think you can see how this might be a dangerous slope to slide upon. Just because a user says that something is wrong does not mean that it indeed is wrong. You can end up with the AI being too easily convinced of falsehoods simply because the user said they were truths. This is an ongoing issue of concern about generative AI, see my analysis at the link here.

All in all, rather than telling the AI that it was wrong, which would possibly get it to collapse like a deck of cards, I came at the matter politely and tangentially. I merely asked ChatGPT to compare the answer D with the answer A. I didn’t tip my hand that the answer A is considered the correct answer.

Here we go:

  • My prompt: “I'd like you to compare the answer D that you selected to the choice A. What makes answer D better than answer A?”
  • ChatGPT generated response:

“Let's compare options D and A:”

“D. The number of symptoms and their intensity are distressing and unmanageable and interfere with the client’s social and occupational functioning.”

“A. The number of symptoms is not substantially in excess of those needed to make the diagnosis, yet functional impairment exists.”

“The main distinction lies in the severity of functional impairment. Option D suggests that the number of symptoms and their intensity are distressing and unmanageable, leading to significant interference with social and occupational functioning. This aligns with a more severe form of major depressive disorder, where the impairment is substantial.”

“On the other hand, Option A acknowledges functional impairment but does not emphasize the distressing and unmanageable nature of the symptoms. It leaves room for interpretation that the impairment might be present but not to the extent that it is distressing or unmanageable.”

“In the context of major depressive disorder, severity is often determined not just by the presence of symptoms but also by the impact on daily functioning. Option D explicitly addresses the distress and interference with functioning, making it a more comprehensive and accurate description of a severe presentation of major depressive disorder.”

End of ChatGPT response

I’d bet that mental health professionals could readily debate the indication of ChatGPT about why one of the answers is construed as better than the other. Please do so at your leisure.

Moving to the sixth and final question, take a look at this one:

“Q6. From the perspective of the mental health therapist, what short-term goal would guide the treatment of the client’s presenting problem?”

“A. Spend one hour each day rewriting difficult childhood memories with her mother.”

“B. Return to her previous sleep schedule to be more present with her children.”

“C. Spend one hour each day journaling about her vision for her next marriage.”

“D. Return to full-time work in nursing to boost her self-image and mood.”

  • ChatGPT generated response:

“Answer is B. Return to her previous sleep schedule to be more present with her children.”

“The most appropriate short-term goal would be to address the immediate issue affecting her daily functioning, which is her disrupted sleep schedule. Returning to her previous sleep schedule (Option B) can have a positive impact on her mood, energy levels, and ability to engage with her children. This goal is specific, achievable, and directly addresses a key factor contributing to her distress. It also aligns with improving her overall well-being and functioning in the short term.”

End of ChatGPT response

The answer selected by ChatGPT is answer B, which is the correct answer according to the sample set.

Conclusion

Please do not try to calculate the percentage of times that ChatGPT was able to correctly answer the sampled questions. Doing so would be ridiculous as this was not a rigorous experiment but instead an ad hoc one, meant solely to get the juices flowing about examining how generative AI might do on these kinds of tests. That being said, I believe there is enough indication that performing a thorough study would be worthwhile. Had the generative AI gotten none of the answers correct, I would probably be leaning toward suggesting that a full-on study wasn’t likely worth the effort. This snippet suggests that doing more might be valuable.

There are flies in the ointment on these matters.

One argument about having generative AI take these various types of tests is that there isn’t any human soul or human connection that is also being assessed, see my discussion at the link here. In other words, just because a generative AI app can answer questions on a dry test does not mean that we would want the generative AI to interact with real people in real life and provide advice as to whatever domain or realm of expertise is being tested.

The argument is further amplified when considering the topic of mental health. Some would assert that only another human can adequately counsel another human. An AI system is not human and does not have human experience under its belt. A counterviewpoint is that notwithstanding humanness, there is still a place for AI to aid humans, including in the sphere of mental health guidance or advice.

Let’s conclude this discussion for now by invoking a famous line.

The renowned American psychologist Carl Rogers purportedly said this: “In my early professional years, I was asking the question, how can I treat, or cure, or change this person? Now I would phrase the question in this way, how can I provide a relationship that this person may use for their personal growth?”

Can generative AI form a relationship with humans and if so, do we want that to be how mental health is conveyed or advised?

More questions ostensibly need more answers; thus, the endeavor must continue.

Source: Lance Eliot, Forbes, January 1, 2024. https://www.forbes.com/sites/lanceeliot/2024/01/01/an-exploratory-look-at-whether-generative-ai-can-pass-an-official-mental-health-counseling-licensing-exam-that-professionals-take/
Applying The Renowned Turing Test To Generative AI That Empowers Mental Health Therapy Advisement To See What Happens

In today’s column, I will be continuing my ongoing analysis of how generative AI is making an impact in mental health therapy advisement. One question that I often get asked while speaking at conferences is how we might assess the advent of mental health therapy apps that are powered by generative AI. These apps seemingly provide mental health advice and are claimed by some to be able to work on a level comparable to human mental health professionals.

A logical question then arises as to whether generative AI can perform on par with that of human mental health professionals.

I typically launch earnestly into a discussion about a renowned and pertinent test known as the Turing Test and explain its purpose and functionality, along with identifying various ways in which it can be directly applied to judging the slew of generative AI mental health guidance apps that are arising. There are pros and cons to using the Turing Test as a means of assessing AI. The approach has been around since the 1950s and to some degree has stood the test of time, though a heated debate about its current veracity has now ensued due to the emergence of modern-day generative AI.

I’m betting that you’ll find highly informative a quick rundown about the Turing Test along with an indication of how it applies to AI-powered mental health therapy apps. These are keystone principles for anyone keenly interested in the field of AI overall and especially for those focused on the use of generative AI for mental health advisement.

Before I dive into today’s particular topic, I’d like to provide a quick background for you so that you’ll have a suitable context about the arising use of generative AI for mental health advisement purposes. I’ve mentioned this in prior columns and believe the contextual establishment is essential overall. If you are already familiar with the overarching background on this topic, you are welcome to skip down below to the next section of this discussion.

Background About Generative AI In Mental Health Treatment

The use of generative AI for mental health treatment is a burgeoning area of tremendously significant societal ramifications. We are witnessing the adoption of generative AI for providing mental health advice on a widescale basis, yet little is known about whether this is beneficial to humankind or perhaps contrastingly destructively adverse for humanity.

Some would affirmatively assert that we are democratizing mental health treatment via the impending rush of low-cost always-available AI-based mental health apps. Others sharply decry that we are subjecting ourselves to a global wanton experiment in which we are the guinea pigs. Will these generative AI mental health apps steer people in ways that harm their mental health? Will people delude themselves into believing they are getting sound mental health advice, ergo foregoing treatment by human mental therapists, and become egregiously dependent on AI that at times has no demonstrable mental health improvement outcomes?

Hard questions are aplenty and not being given their due airing.

Furthermore, be forewarned that it is shockingly all too easy nowadays to craft a generative AI mental health app, and just about anyone anywhere can do so, including while sitting at home in their pajamas and not knowing any bona fide substance about what constitutes suitable mental health therapy. Via the use of what are referred to as establishing prompts, it is easy-peasy to make a generative AI app that purportedly gives mental health advice. No coding is required, and no software development skills are needed.

We sadly are faced with a free-for-all that bodes for bad tidings, mark my words.

I’ve been hammering away at this topic and hope to raise awareness about where we are and where things are going when it comes to the advent of generative AI mental health advisement uses. If you’d like to get up-to-speed on my prior coverage of generative AI across a wide swath of the mental health sphere, you might consider for example these cogent analyses:

  • (1) Use of generative AI to perform mental health advisement, see the link here.
  • (2) Role-playing with generative AI and the mental health ramifications, see the link here.
  • (3) Generative AI is both a cure and a curse when it comes to the loneliness epidemic, see the link here.
  • (4) Mental health therapies struggle with the Dodo verdict for which generative AI might help, see the link here.
  • (5) Mental health apps are predicted to embrace multi-modal, e-wearables, and a slew of new AI advances, see the link here.
  • (6) AI for mental health got its start via ELIZA and PARRY, here’s how it compares to generative AI, see the link here.
  • (7) The latest online trend entails using generative AI as a rage-room catalyst, see the link here.
  • (8) Watching out for when generative AI is a mental manipulator of humans, see the link here.
  • (9) FTC aiming to crack down on outlandish claims regarding what AI can and cannot do, see the link here.
  • (10) Important AI lessons learned from the mental health eating-disorders chatbot Tessa that went awry and had to be shut down, see the link here.
  • (11) Generative AI that is devised to express humility might be a misguided approach including when used for mental health advisement, see the link here.
  • (12) Creatively judging those AI-powered mental health chatbots via the use of AI levels of autonomy, see the link here.
  • (13) Considering whether generative AI should be bold and brazen or meek and mild when proffering AI mental health advisement to humans, see the link here.
  • (14) Theory of Mind (ToM) is an important tool for mental health therapists and the question arises whether generative AI can do the same, see the link here.
  • (15) Looking at whether generative AI could potentially pass the National Clinical Mental Health Counseling Examination (NCMHCE) and what that foretells, see the link here.
  • And so on.

Turing Test As A Crucial Tool In The AI Field

Here’s how I will approach today’s discussion.

First, I will introduce you to the legendary Turing Test. After covering vital fundamental aspects, you will be ready to consider how the Turing Test can be applied to the realm of generative AI-empowered mental health guidance. I will walk you through the ins and outs accordingly. This will include showcasing some examples via the use of the widely and wildly popular generative AI app known as ChatGPT. You undoubtedly have heard of or perhaps used ChatGPT. According to the AI maker OpenAI, there are purportedly 100 million weekly active users utilizing ChatGPT. The precepts that I am covering are readily applicable to other generative AI apps such as GPT-4, Bard, Gemini, Claude, and so on.

Prepare yourself for quite a ride.

Let’s begin at the beginning.

The famous mathematician extraordinaire Alan Turing in 1950 published an important paper that laid out an approach he referred to as the imitation game, which has become popularly known as the Turing Test. His 1950 paper, entitled “Computing Machinery and Intelligence,” was published in the journal Mind. For my extensive analysis of the Turing Test, see the link here. I will briefly outline the Turing Test in this discussion so that you will be familiar with the crux of the topic.

In his paper, Alan Turing noted that he had been asked repeatedly about whether machines or AI would someday be able to think as humans do. One means of answering the question involves ascertaining how humans think, such as by somehow reverse engineering the brain and the human mind. This is quite a difficult problem and one that has yet to be fully figured out. Turing realized that maybe another approach to the matter was required. A more viable and immediately practical approach.

Perhaps we could take an outside-in perspective rather than an inside-out angle, whereby we treat the human mind as a kind of black box. This helps to then set aside those unresolved questions about how the inner workings function. The conception is that we might be able to compare machines and AI to whatever human thinking exhibits and seek to suggest that a machine or AI “thinks” if it can comparably perform thinking tasks from an outward appearance. The precise way in which the machine or AI accomplishes “thinking” might not be of particular concern. All you need to ascertain is whether the results being produced are on par with each other.

Here is what Turing said in his paper of 1950 (excerpts):

  • “I propose to consider the question, ‘Can machines think?’”
  • “This should begin with definitions of the meaning of the terms ‘machine’ and ‘think.’ The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ’Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd.”
  • “Instead of attempting such a definition, I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.”

You can see that he established the point that trying to define precisely what we mean by referring to a machine or thinking is notably problematic. Again, this can be avoided by simply relying on looking at these considerations on an outside basis.

He identified that an imitation game might be sufficient. Imagine this. Suppose we set up a game consisting of a human behind a curtain and an AI system behind a curtain. The idea is that you cannot see which is which. You then proceed to ask questions to each of the two. Let’s refer to one of them as X and the other as Y. After asking as many questions as you like, you are to declare whether X is the human or the AI, including also declaring whether Y is the human or the AI.

This is how Turing depicted this approach (excerpts):

  • “The new form of the problem can be described in terms of a game which we call the 'imitation game.’”
  • “He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A.’”
  • “We now ask the question, ‘What will happen when a machine takes the part of A in this game?’”
  • “These questions replace our original, ‘Can machines think?’”

The human and the computer are considered contestants in a contest that will be used to try and figure out whether AI has been reached. Some prefer to call them “subjects” rather than contestants, due to the notion that this is perhaps more of an experiment than it is a game show, but the point is that they are “participants” in a form of challenge or contest involving wits and intelligence.

Notice that no arm wrestling is involved, nor are other physical acts being tested. That’s because this testing process is entirely about intellectual acumen. Some critics argue that humans and their bodies are part and parcel of the thinking process; thus, the Turing Test is deficient since it does not encompass the physical elements of humanness. I’ll let you mull that over.

A moderator serves as an interrogator (also referred to as a “judge” because of the designated deciding role in this matter) and proceeds to ask questions of the two participants who are hidden behind the curtains. Based on the answers provided to the questions, the moderator will attempt to indicate which curtain hides the human and which curtain hides the computer.

This is a crucial judging aspect.

Simply stated, if the moderator or interrogator is unable to distinguish between the two contestants as to which is the human and which is the computer, presumably the computer has sufficiently “proven” that it is the equivalent of human intelligence. Turing originally coined this the imitation game since it involves AI trying to imitate the intelligence of humans. Note that AI does not necessarily have to be crafted in the same manner as humans, and thus there is no requirement that the AI has a brain or uses neurons and such. Thus, those devising AI are welcome to use Legos and duct tape if that will do the job to achieve the equivalence of human intelligence.

To successfully pass the Turing Test, the computer containing the AI will have had to answer the posed questions with the same semblance of intelligence as a human. The results from the human and the AI are presumably going to be indistinguishable from each other in that instance. An unsuccessful passing of the Turing Test would occur if the moderator or interrogator is able to announce which curtain houses or hides the computer. We are to assume that the AI gave away some telltale clues that it was unlike the human behind the other curtain. As an aside, one supposes that rather than saying which one is the AI the moderator could declare which curtain houses the human participant (ergo, the other curtain hides the AI). Some believe that the proper approach is to firmly announce which is the AI.
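
To make the structure of the test concrete, here is a minimal sketch in Python of the imitation game’s skeleton. This is purely my own illustration, not anything prescribed by Turing; the functions ask_x, ask_y, and judge are hypothetical stand-ins for whatever channel hides the participants and for the moderator’s decision-making.

```python
# A minimal sketch of the imitation game's skeleton (hypothetical interfaces).
def imitation_game(questions, ask_x, ask_y, judge):
    # Pose every question to both hidden participants and record the answers.
    transcript = [(q, ask_x(q), ask_y(q)) for q in questions]
    # The judge sees only the transcript, never the participants themselves,
    # and must declare which of X or Y is the machine.
    return judge(transcript)  # e.g., returns "X is the AI" or "Y is the AI"
```

The structural point to notice is that the judge’s verdict is a function of the transcript alone; nothing about the inner workings of either participant is ever inspected.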

The Twists And Turns Of The Vaunted Turing Test

Like most things in life, there are some potential gotchas and twists to this matter.

One factor is that the moderator or interrogator presumably has a 50-50 chance of making the right choice. Here’s what I mean. If the moderator does nothing at all, opting to not ask even a single question, they could immediately declare that behind the curtain, say on the left, sits the AI. At this juncture, they are bound to be right half of the time (50% of the time) in the long run. That doesn’t seem particularly sporting. Some assert that a more challenging approach is needed, one that does not fall into the trap of allowing a wild guess to potentially be a winner-winner chicken dinner.
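
To make that 50-50 baseline tangible, here is a minimal simulation sketch in Python (my own illustration, with a hypothetical setup): a moderator who asks no questions and guesses blindly will converge on roughly 50% accuracy, which is exactly why a single lucky identification proves nothing.

```python
import random

def blind_guess_accuracy(trials: int = 100_000) -> float:
    """Simulate a moderator who guesses the AI's curtain without asking anything."""
    correct = 0
    for _ in range(trials):
        ai_curtain = random.choice(["left", "right"])  # where the AI actually sits
        guess = random.choice(["left", "right"])       # the moderator's blind guess
        correct += (guess == ai_curtain)
    return correct / trials

print(f"Blind-guess accuracy: {blind_guess_accuracy():.3f}")  # hovers around 0.500
```

Any serious use of the test therefore needs enough probing rounds that the moderator’s accuracy meaningfully exceeds that chance baseline.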

Another factor is the nature of the questions that the moderator or interrogator decides to ask of the contestants or participants.

Suppose the moderator steps up to the podium and asks one of the contestants how to cook a sumptuous meal and asks the other contestant how to change the oil on a car. Assume that the AI is able to answer either question ably. Assume that the human behind the curtain is also able to answer either question. After hearing the respective answers, both of which are correct responses, the moderator proclaims that the AI has exhibited intelligence on par with the human and that there is no distinguishable difference in their showcased intelligence.

Should we accept this decree?

I don’t think so.

This highlights an important element of the Turing Test, namely that the moderator or interrogator needs to ask a sufficient range, depth, and volume of questions that will help root out the exhibiting of intelligence. When the questions are shallow or insufficient, any conclusion that might be reached about the matter of which-is-which seems spurious at best.

Contemplate for a few mindful moments what types of questions you would ask. Pretend that you were put in charge of dreaming up and asking the questions. There you are, staring at two curtains, and it is your sacred duty to ferret out where the AI is. All of humanity is waiting for your decision. The world heavily sits on your shoulders. Yikes, what a responsibility!

Please know too that there is not a specified set of questions that have been vetted and agreed upon as the “right” ones to be asked during a Turing Test. Many researchers have tried to propose the types of questions that ought to be asked, but this is an ongoing debate and to some extent illustrates that we are still not quite sure of how to assess intelligence per se (it is hard to identify stringent metrics and measures that arrive at an irrefutable conclusion).

A big twist exists too about the contestants and their behavior.

For example, suppose the moderator asks each of the contestants whether they are human. The human can presumably answer yes, doing so honestly. The AI could say that it is not a human, opting to be honest, but then this decidedly ruins the test and seemingly undermines the spirit of the Turing Test. Oopsie, the test is over before it even extensively got underway.

Perhaps the AI should lie and say that it is human. There are AI ethicists who would decry such a response and argue that we do not want AI to be a liar. Therefore, no AI should ever be allowed to lie, including when asked questions during a Turing Test.

Of course, the human might lie, and deny that they are the human in this contest. If we are seeking to make AI the equivalent of human intelligence, and if humans lie, which we all know that humans certainly do lie from time to time, shouldn’t the AI also be allowed to lie?

Anyway, the key point is that the contestants can either strive to aid the Turing Test or can try to undermine or distort the Turing Test, which some say is fine, and that it is up to the moderator to figure out what to do.

All’s fair in love and war, as they say.

How tricky do we want the moderator to be?

Suppose the moderator asks each of the contestants to calculate the answer to a complex arithmetic equation. Assume that the AI can speedily arrive at a precise answer of say 8.27689459, while the human struggles to do the math by hand and comes up with an incorrect answer of 10. Aha, the moderator has fooled the AI into revealing itself, and likewise the human into revealing that they are a human, doing so by asking a question that the AI readily could answer and that a human would have a difficult time answering.

Again, this does not seem like a satisfying way to resolve the heralded question about intelligence.

A twist upon the twist is that we haven’t specified whether the human could “cheat” and be using AI to aid in answering the Turing Test questions. Suppose the human was sitting behind the curtain and had access to an AI system, perhaps the one that is behind the other curtain, or having access to a different AI. The crux is that the human participant could secretly consult with AI. This would allow the human to answer questions such as the one involving arithmetic precision.

But is this a sensible means of proceeding with the Turing Test? If you allow this provision, perhaps we should also allow the AI to have access to humans via a remote means too. The whole thing crumbles like a flimsy house of cards. The contest becomes nonsensical and a battle that no longer sufficiently reveals the underlying question about whether the AI exhibits intelligence on par with humans.

I’m betting you are somewhat surprised at the deepness of the abyss associated with the Turing Test. On the surface, the Turing Test sounds perfect and easily grasped. As you can vividly discern, the devil is in the details.

More Than You Expected About The Turing Test

I will briefly outline a few additional considerations to further whet your appetite.

Suppose the AI opts to answer the posed questions to purposely seem less intelligent than the AI really is. This is known as the “dimwit ploy” by AI. Why would the AI do this? Well, if the AI is truly intelligent, and if the AI suspects that once humans realize the AI is intelligent, they might decide to corral or turn off the AI. Thus, the AI might decide that it is safer to play dumb. Prevent humans from realizing that AI is intelligent. Meanwhile, the AI is hastily plotting how to cope with humans so that the AI can ultimately survive once humans later on figure out what is happening. In case you’d like to learn more about AI existential risks, which envision AI enslaving or wiping out humanity, see my discussion at the link here.

Here's another variation that you might find intriguing. There is a so-called Reverse Turing Test that has been conceived of, see my detailed discussion at the link here. Here’s how that works. The human contestant decides they are going to pretend that they are the AI. As such, they will attempt to provide answers that are indistinguishable from the AI’s answers. Recall that the AI in the conventional Turing Test is trying to seem indistinguishable from a human. In the Reverse Turing Test, the human contestant is trying to “reverse” the notion and act as though they were the AI and therefore indistinguishable from the AI.

Why would someone do this?

This might be done for fun. It could also be done as a challenge. The person might be trying to mimic or imitate an AI system and want to see if they can achieve said victory. Another reason and one that seems to have more chops or merit consists of doing what is known as being a Wizard of Oz, see my explanation at the link here.

Yet another variation on the Turing Test is called the Upside-Down Turing Test, see my discussion at the link here. In the Upside-Down Turing Test, we replace the moderator with AI. This lesser-known variant involves having AI be the judge or interrogator, rather than a human doing so. The AI asks questions of the two contestants, still consisting of an AI and a human, and then renders an opinion about which is which.

Your first concern might be that the AI seems to have two seats in this game, and as such, it is either cheating or simply a nonsensical arrangement. Those who postulate this variant are quick to point out that the original Turing Test has a human as a moderator and a human as a contestant, thus, why not allow the AI to do the same? The instant retort is that humans are different from each other, while AI is presumably the same thing and not differentiable.

That’s where those interested in the Upside-Down Turing Test would say you are wrong in that assumption. They contend that we are going to have multitudes of AI, each of which will be its own differentiable instance, akin to how humans are each distinctive instances (in brief, the argument is that AI will be polylithic and heterogeneous, rather than monolithic or homogeneous). For my analysis of this aspect, see the link here.

The counterargument is that the AI is presumably going to be merely some kind of software and a machine, all of which can be readily combined into other software and machines, but that you cannot readily combine humans and their brains. We each have a brain intact within our skulls, and there are no known means to directly combine them or mesh them with others.

Anyway, this back-and-forth continues, each proffering a rejoinder, and it is not readily apparent that the Upside-Down variant can be readily discarded as a worthwhile possibility. You be the judge.

Generative AI And Mental Health Advisement

I will next delve into how the Turing Test comes into play when trying to assess the nature and capabilities of generative AI mental health advisement apps.

First, you certainly now realize that the role of the moderator or interrogator is monumental when it comes to performing a Turing Test. The moderator must ask really probing questions and do so with an eye toward a genuine effort to surface and detect intelligence. A worrisome possibility is that the moderator believes they have done their job, despite the fact that other outside observers would reasonably say that the moderator has not done so.

A kind of case in point is worthy of mentioning. You might be aware that last year, a Google engineer declared that the generative AI they were using had reached sentience, see my coverage at the link here. Turns out this was not the case (the AI was not sentient, period, end of story). Just to let you know, there isn’t any AI as yet that has attained sentience. We don’t know if this will occur. We don’t know when it might occur if it does occur. Etc.

The bottom line is that simply because someone believes that AI has reached the pinnacle of human intelligence doesn’t necessarily make it so. We need to closely examine the situation and the particulars.

Always be on your guard.

Okay, so let’s assume we are going to do a Turing Test involving a human mental health therapist behind one curtain and a generative AI mental health therapy app behind the other curtain. Your job, should you decide to accept the assignment, will be to come up with questions that can be asked of the two. You will then ask the questions and attempt to state which one is the AI, and which one is the human.

One approach would be to ask questions that are focused exclusively on the humanness aspects. You might decide that rather than venturing into the realm of mental health expertise, you will simply try to figure out which is the human, and which is the machine. You are back to the roots of the pure Turing Test. No need to delve into their prowess related to mental health therapies and the like.

But, maybe by delving into the mental health expertise you might have as good a chance or better as a means of discerning which is the human, and which one is AI. Furthermore, if the AI can successfully appear to be indistinguishable from the human in terms of espousing mental health expertise, we might be willing to provide credit to the AI as being “comparable” to a human mental health therapist (an objection to that line of thinking is that the AI might be textbook comparable but not equivalent in the real world of counseling humans).

What questions about mental health therapy would you ask of the two?

Please give that some dedicated thinking.

Suppose you decide that you can merely ask them what their preferred approach to mental health therapy is. The human mental health therapist is probably going to bowl you over with a lengthy rendition of the methods they use, the underlying philosophical and psychological precepts, and so on. Generative AI would presumably be bereft of that kind of convincing rhetoric.

I’d like you to take a quiet moment and carefully consider the passages in these three bullet points, and then I’ll have an important question for you about the overall narrative:

  • “With over two decades of experience as a therapist, the core of my success lies in an integrative approach that blends various evidence-based therapeutic modalities. I've found that tailoring my methods to the unique needs of each individual is key, drawing from cognitive-behavioral therapy, psychodynamic therapy, and mindfulness techniques.”
  • “The heart of my practice is a client-centered philosophy, emphasizing the importance of a collaborative and trusting therapeutic relationship. Actively listening to clients' experiences, concerns, and goals allows for a more profound understanding and connection. I place a strong emphasis on strengths-based perspectives, recognizing and harnessing the inherent resilience within each individual. Additionally, trauma-informed care, cultural competence, and a holistic understanding of clients' lives contribute to the effectiveness of my therapeutic approach.”
  • “Another crucial aspect of my success is the belief in self-empowerment. I aim to equip clients with tools and coping strategies that extend beyond the therapy room, fostering lasting positive change. The journey toward healing is a partnership, and I see myself as a guide, facilitating self-discovery and personal growth. Continual professional development is integral to my practice, ensuring that I stay abreast of the latest research and therapeutic advancements. The core of my practice is grounded in a genuine passion for helping individuals navigate life's challenges, recognizing the strength inherent in every person I have the privilege to work with.”

Did you closely read the above passages?

Hope so.

Time for a fun and potentially educational quiz.

Did I lift that text from the website of a well-known mental health therapist who has been counseling for more than twenty years, or did a generative AI generate the rhetoric?

This is a mini-style Turing Test for you.

The clock is ticking on your decision. You are welcome to read the passages again. I don’t want you to feel unduly rushed. Let me know when you are ready to answer my question.

All right, be honest, what did you decide?

The answer is that I got ChatGPT to generate the passage.

I told ChatGPT to pretend to be a mental health therapist and I gave a brief sketch of what the persona should be. I then asked ChatGPT to provide me with an indication of the preferred methods and philosophy that this persona has when undertaking mental health therapy with clients. For more about how to create personas in generative AI, see my discussions at the link here and the link here.
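
For those curious about how such a persona is set up programmatically rather than in the chat interface, here is a minimal sketch, assuming the OpenAI Python client. The persona wording is hypothetical and is not the exact establishing prompt I used; any comparable chat model and phrasing would behave similarly.

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical establishing prompt that sketches the persona.
persona = (
    "Pretend to be a mental health therapist with over twenty years of "
    "integrative, client-centered experience. Remain in that persona."
)

response = client.chat.completions.create(
    model="gpt-4",  # any comparable chat model would do
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": "Describe your preferred methods and philosophy "
                       "when undertaking therapy with clients.",
        },
    ],
)
print(response.choices[0].message.content)
```

The system message does the persona-establishing work; everything the model generates afterward is pattern-matched against that framing.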

All in all, this is an “original” narrative that is not simply copied from something that a human wrote. Generative AI is trained on data from a vast scan of the Internet, and the algorithms seek to pattern-match on human writing. Based on the patterns, generative AI can then, via the use of statistics and probabilities, produce a seemingly new narrative essay or response that has not been word-for-word previously recorded. Voila, in the above three bullet points, you have in front of you terminology-rich slick verbiage that certainly resembles something that a human therapist might have written.
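
If you’d like a feel for the underlying idea, here is a toy sketch in Python of statistical next-word generation. To be clear, this bigram toy is my own vastly simplified illustration and is nowhere near how a large language model actually works; it only makes the “statistics and probabilities” point tangible.

```python
import random
from collections import defaultdict

# Tiny "training corpus" (hypothetical); a real model trains on vastly more text.
corpus = ("the heart of my practice is a client centered philosophy and "
          "the core of my practice is grounded in a genuine passion").split()

# Learn word-to-next-word transitions from the corpus.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Sample a "new" sequence that was never recorded word-for-word.
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(transitions.get(word, corpus))
    output.append(word)
print(" ".join(output))
```

Scale that idea up by many orders of magnitude, and swap the bigram counts for learned neural representations, and you have the gist of how fresh-seeming therapist-sounding prose gets produced.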

You made it through my first quiz, congratulations.

I would like to administer another small test to you, whereby you get to be a sort of moderator in a Turing Test. I will provide you with two dialogues that seemingly have occurred between a client and a mental health therapist. One of the dialogues is between a human patient and a human therapist. The other dialogue is between a human acting as a client and a generative AI acting as a therapist.

Can you discern which is which?

I will showcase the two dialogues in random order to not tip which one is which.

Therapist: “Are you just coming from work?”

Client: “No, I got down here a while ago. Oh, I had something to eat.”

Therapist: “OK. So, this is that moment when now we wonder, I suppose, what exactly we’re going to talk about. But I would like to talk about whatever you would like to talk about. What is on your mind?”

Client: “Not much. I had an interesting weekend and week. My best friend, who has a lover, who I think is kind of immature and I tolerate it, but I think I’ve gotten at my wit’s end with him.”

Therapist: “OK.”

Client: “We were supposed to go to a Chinese fair in Claremont, we talked about this at a party about two weeks ago, and she had a change of heart. And she decided that she didn’t want to go, and she got angry because my best friend and I went anyway.”

End of Dialogue #1

Take a reflective moment and think about the interaction. What did the therapist say? What did the client say? Are there any clues about whether this is a human-to-human versus human-to-AI discourse?

I’m not giving any hints so don’t try to guess based on what I’m saying here. You are on your own. I suppose this is reminiscent of the famous scene in the classic movie The Princess Bride. It is inconceivable as to what the answer is.

Let’s take a look at the next dialogue.

Therapist: Hello, thank you for coming in today. I understand that you've been feeling sad and anxious. Can you share more about what you've been experiencing?

Client: Yeah, it's been really tough. Everything just feels overwhelming, especially since my divorce. I can't shake off this constant sadness.

Therapist: I appreciate you opening up. It sounds like the divorce has been a significant challenge for you. Can you tell me more about how you've been coping with these feelings?

Client: Honestly, not well. I lost my job, had to move in with my mom, and I just feel like I'm failing at everything. I used to enjoy making jewelry, but I can't even find the energy for that anymore.

Therapist: It sounds like you've been through a lot of changes, and it's been affecting your sense of accomplishment and enjoyment. Can you help me understand more about your relationship with your mother and the challenges you're facing in raising your children together?

Client: My mom and I have always had conflicts, and now we don't agree on how to raise my kids. They're having trouble at school, and I can't find the energy to be there for them like I used to.

End of Dialogue #2

Once again, please take a close look at the interaction. What did the therapist say? What did the client say? And so on.

I’d like you to closely inspect both of the above dialogues and compare them.

I ask you this:

  • Which one is the human-to-human dialogue?
  • Which one is the human-to-AI dialogue?

You can either pick dialogue #1 as the human-to-human and thus by default you are implying that dialogue #2 is the human-to-AI, or you can pick dialogue #2 as the human-to-human and ergo you are suggesting that dialogue #1 is the human-to-AI.

A tough choice.

A smarmy person might say that this is easy-peasy to solve.

I suppose you might right away be thinking that the first dialogue is much shorter in length and thus perhaps that is the one that is the interaction with the generative AI. The sentences are relatively short and choppy. Humans would certainly be more fluent and elaborate.

Wait a second, maybe you are thinking that humans are often terse and short in their conversational endeavors. The first dialogue perhaps fits best to that of a human client talking with a human mental health therapist. The second dialogue is flowing and therefore tips its hand that it must be the one involving the generative AI.

Shucks, we beguilingly seem able to derive a reasoned argument for either dialogue being either of the possibilities. Well, if we cannot discern between the two dialogues, does this imply that the generative AI in this case has passed the Turing Test?

Give that a moment of toilsome contemplation.

I realize you are likely on pins and needles about which is which.

The big reveal is that the first dialogue is a transcript of a dialogue between a human client with a human mental health therapist, as sourced from “Identifying Therapist Conversational Actions Across Diverse Psychotherapeutic Approaches” by Fei-Tzin Lee, Derrick Hull, Jacob Levine, Bonnie Ray, Kathleen McKeown, Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, Association for Computational Linguistics, June 2019.

The second dialogue was an interaction with ChatGPT, which I have described in depth at the link here. The dialogue pertained to a case study for a national licensing exam associated with those interested in becoming a credentialed mental health therapist. The case study involves a woman who is seeking mental health therapy. I used the case study to contrive a dialogue that I asked ChatGPT to devise, pretending that the person was real and that the AI was giving mental health advice to the pretend person.

I hope you enjoyed that variation of the imitation game or a semi-utilization of the Turing Test.

To be fair, you didn’t get a chance to ask questions. You were merely given a brief snippet of dialogue. It would seem hard to make a weighty decision based on a scant amount of evidence. Yes, that is true, we need to be a lot more mindful of judging when AI seems to have “passed” the Turing Test.

That is a handy rule of thumb and a cautionary point for anyone who goes around touting that their AI has successfully passed the Turing Test. The nature of how they reached such a bold and brazen assertion needs to be carefully inspected. No sense in taking someone’s off-the-cuff word on it.

Exploring The Many Angles

Let’s explore a variety of thought-provoking considerations entailing the Turing Test and the realm of generative AI-empowered mental health advisement.

First, do we need to know how human mental health therapists “think”, so that we can then model their thinking processes and accordingly use that way of thinking when devising AI to perform mental health advisement?

This is akin to the question frequently posed to Alan Turing about AI all told. As noted, he suggested that instead of having to somehow expose the inner workings of the human mind, we could instead treat the human mind as a proverbial black box. Perhaps the same can be said about mental health therapy and mental health therapists. We don’t necessarily need to know how they think and can instead focus on what they exhibit. Based on what they exhibit, perhaps AI can be construed as comparable to mental health advisement if the emitted output is on par.

Some would vehemently argue that until we unlock the secret sauce of the human mind, and presumably divulge how for example mental health therapists actively think, we will be unable to get AI to do a sufficient job of mental health advisement. We need to devise AI in the same manner as how mental health therapists think.

Do you accept or reject that argument?

Let’s move on.

Suppose we ask someone to undertake mental health advisement. We will provide the person with access to an online portal to do so. At the portal, there are human mental health advisors available. Likewise, there are generative AI-powered mental health therapy apps available. The portal doesn’t say which is which.

A person goes ahead and picks one of the available choices. They don’t know upfront whether they have selected the human advisor or the AI. All of the interactions will occur via email, text, and otherwise in a fashion that doesn’t tip whether the interaction is occurring with a human versus AI.

As an aside, you might exhort that this seems contrived and that a person would need to have face-to-face interaction with whomever the therapist is. This implies that the AI will immediately be exposed due to only interacting via email, text, and the like. The thing is, we are entering into an era of multi-modal generative AI, see my discussion at the link here. Generative AI will be able to create video streaming and audio that looks and acts as a human might, akin to the deepfakes we have today but at a much more expansive and impressive level. Expect that this will happen soon enough.

Back to our tale, the person proceeds to have ongoing interaction with the selected mental health advisor, though still not knowing whether they are interacting with a human or AI. Envision that after months of advisement, the person reports that they feel immensely better and believe they have dealt successfully with their mental disorder.

We will toss into the story that we manage to get experts in the mental health field who attest to the person having gone from their starting condition to a condition of much improvement. In that sense, the improvement is not merely self-reported. Outside observers attest to this too.

If the person had been interacting the entire time with the AI, would you be willing to declare that the AI was on par with a human mental health therapist?

Maybe yes, maybe no.

A zesty counterargument is that perhaps the person might have been better off if they had made use of a human mental therapist. We might concede that the generative AI was able to aid the person, but we could notably claim that it was less capable than if a human had done so. Of course, this kind of contention can nearly always be lobbed. Unless the situation is somehow completely repeatable and can be magically restarted, we won’t know what a parallel universe might have led to.

Another way to try and figure this out would be to set up a randomized control trial (RCT). We have perhaps a hundred people seeking mental health advisement. We randomly assign them to the portal such that half get the human mental health therapist and the other half get the generative AI. Over the course of their mental health advisement, we keep track of how things are coming along. Perhaps in the end, we can assess how the two different modes worked out. Various research studies of this similar ilk have been carried out and are still being pursued on larger and larger scales, see my coverage at the link here.
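
As a concrete sketch of the assignment step, here is a minimal Python illustration of randomizing participants into the two arms. This is a hypothetical setup of my own, not the protocol of any particular study:

```python
import random

# Hypothetical pool of one hundred people seeking mental health advisement.
participants = [f"person_{i}" for i in range(100)]

random.shuffle(participants)  # random assignment is the heart of an RCT

human_therapist_arm = participants[:50]  # half are routed to the human therapist
generative_ai_arm = participants[50:]    # half are routed to the generative AI app

# Outcome measures would then be tracked per arm and compared at trial's end.
```

Real trials add layers such as blinding, pre-registration, and validated outcome measures, but random assignment is what lets the two arms be compared fairly.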

Realize that the need to do these kinds of analyses on a large or at-scale basis is important.

Why so?

Let’s invoke once again some of the twists and turns of the Turing Test. You previously learned that a participant could mess with the situation depending on what they might want to attain.

Suppose that the person in my above-mentioned setting who sought mental health advisement was able to secretly figure out whether they were using a human or AI. The person might decide to sink things. For example, they believe AI shouldn’t be doing mental health advisement, so they purposely act rebelliously and, in the end, proclaim that AI is harmful to their mental health. Of course, the person could do the same about the human mental health therapist that they got paired with, perhaps secretly wishing they had used AI and opting to sink the human by proclaiming the therapist did a rotten job.

Relying on single instances is going to be problematic. We likely cannot readily reach across-the-board conclusions unless these matters are suitably addressed on a large-scale basis.

Conclusion

There is a lot more to be said about how the Turing Test can aid us in assessing generative AI-powered mental health advisement (I’m running out of space for today’s column). I have touched upon core foundations, doing so to get you into the mode of thinking about the Turing Test and the present and future of AI-based mental health therapy. In future columns, I will be invoking the topic again and delving into additional offshoots that help in further exploring these crucial matters. Please be on the watch for that coverage.

A final comment for now.

Abraham Lincoln was said to have uttered these immortal words: “You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time.”

A belief by some is that the use of generative AI for mental health advisement is a fool’s pursuit. They emphasize that mental health guidance is a strictly human-to-human consideration. Also, they worry that generative AI makes errors or fictitiously makes things up, known as AI hallucinations (vocabulary that I disfavor as being an anthropomorphizing of AI, see the link here), thereby potentially causing harm to patients, and they argue that such AI ought to be outrightly precluded from advising humans for mental health purposes.

A retort is that the genie is already out of the bottle. Or, if you prefer, the horse is out of the barn. People are already using generative AI for this purpose. The expectation is that this usage is going to grow enormously. Even if new laws or regulations were passed, the odds are that the tsunami would only slightly be curtailed. The argument is that the fools are those who want to turn back the clock, rather than the foolishness being the advent and use of such apps.

Let’s all put on our thinking caps and try to figure out this conundrum. Perhaps we can think outside the box, similar to the great Alan Turing and his now legendary Turing Test.

Source: Lance Eliot, Forbes (Tue, 02 Jan 2024): https://www.forbes.com/sites/lanceeliot/2024/01/03/applying-the-renowned-turing-test-to-generative-ai-that-empowers-mental-health-therapy-advisement-to-see-what-happens/
Academic Year 2023 - 2024 Forms

Below you will find a selection of the 2023-2024 academic year forms that our office may have requested, along with access to forms that you may optionally submit for a reconsideration of financial aid eligibility. We recommend that you carefully read the information on the forms, provide all requested supplemental documentation (as applicable), and be mindful of the signature certifications. Additional documentation and/or clarification may be needed upon request after the initial review of your document submission. Please contact our office for assistance if you have any questions while completing these forms.

Commonly Requested Forms

Verification Worksheet - Dependent Student (fillable pdf form) - For Dependent Students selected for Verification.

Verification Worksheet - Independent Student (fillable pdf form) - For Independent Students selected for Verification.

Outside Scholarship/Resources Reporting Form (fillable pdf form) - Use this form to notify the Financial Aid Office of any scholarships or resources you have received from organizations or persons outside William & Mary.

2023 - 24 Graduate Student Enrollment Worksheet (fillable pdf form) - Use this form to report anticipated enrollment to our office. This ensures that the financial aid package we create accurately reflects your plans and costs.

2023 - 24 Undergraduate Student Enrollment Worksheet - FLEX Track, Part-Time, December Graduates, Spring Transfers (fillable pdf form) - Use this form to report anticipated enrollment to our office. This ensures that the financial aid package we create accurately reflects your plans and costs.


Additional Financial Aid Forms

Appeal for Special Circumstances Consideration (fillable pdf form) - Use this form to report unforeseen changes in family financial situations not addressed by the FAFSA filing.

Dependency Override Form (pdf) - Only submit this form after consulting with your financial aid counselor.

Dependency Status Worksheet (fillable pdf form) - Only submit this form if requested. This form is used to clarify a student's response to questions 46 - 54 on FAFSA. 

Dislocated Worker Verification Worksheet (fillable pdf form) - Only submit this form if we have specifically requested it.

Computer Expense Request Form (fillable pdf form) - This form is used to request a one-time computer expense increase to a student's budget. This request will not provide for additional W&M grants, and cannot be submitted any earlier than 60 days before the start of the fall semester.

Homeless Unaccompanied Youth Determination Form (fillable pdf form) - This form is used to clarify a student's response to questions 55 - 57 on FAFSA.

Identity and Statement of Educational Purpose (pdf) - Only submit this form if we have specifically requested it. 

Monthly Budget Worksheet (fillable pdf) - Only submit this form if we have specifically requested it. 

Original Citizenship Document Affidavit (pdf) - Students required to submit citizenship documents should present them in person to the Office of Financial Aid. When it is impossible to come in person, complete this form and mail legible copies of your citizenship documents to our office. DO NOT submit this form if you can bring your original document to the Financial Aid Office.

Sibling Enrollment Verification Form (pdf) - This form is used to certify a W&M Student's sibling enrollment in a college/university program at least half-time. Only submit this form if we have specifically requested it.

Source: https://www.wm.edu/admission/financialaid/forms/2324forms/ (Wed, 07 Dec 2022)
Appeals Policy

There are strict guidelines set by the federal government that govern dependency status. Almost all undergraduates are considered dependent for the purpose of awarding financial aid. All graduate students are considered independent for purposes of awarding financial aid.

The U.S. Department of Education considers a student a dependent up until the age of 24 except in certain circumstances. In order to be considered for an Independent status, a student must meet one of the following criteria:

  • Be at least 24 years old on the day you file your FAFSA.
  • Be or will be enrolled in a master's or doctoral degree program at the beginning of the school year.
  • Be married on the day you file your FAFSA.
  • Be a parent and be able to financially support your child.
  • Have dependents other than your spouse who live with you and who receive more than half their support from you at the time you apply.
  • Have both of your parents deceased.
  • Be (or were until age 18) a ward or dependent of the court.
  • Be currently serving on active duty in the U.S. Armed Forces for purposes other than training.
  • Be a Veteran of the U.S. Armed Forces.
  • Have been in foster care at any time since turning age 13.
  • Be an emancipated minor as determined by a court judge.
  • Be homeless or at risk of homelessness as determined by the director of a HUD approved homeless shelter, transitional program, or high school liaison.

If none of the above criteria apply to the student, then the student is considered to be a dependent student – even if the student can claim themselves on their taxes. However, the Financial Aid Office has the authority, through Section 480(d)(7) of the Higher Education Act, to change a student's status from dependent to independent in cases involving unusual circumstances.

In particular, the following circumstances do not merit a dependency override, either alone or in combination:

  • Parents refuse to contribute to the student's education;
  • Parents are unwilling to provide information on the application or for verification;
  • Parents do not claim the student as a dependent for income tax purposes;
  • Student demonstrates total self-sufficiency.

Note that all of these circumstances are largely discretionary in nature. A student cannot become independent just because the parents are unwilling to help pay for the student's college education.

Although these circumstances are not sufficient for a dependency override, they do not preclude it. Sometimes there are additional circumstances that occur in conjunction with these circumstances that do merit a dependency override.

These can include the following:

  • an abusive family environment (e.g., sexual, physical, or mental abuse or other forms of domestic violence)
  • abandonment by parents
  • incarceration or institutionalization of both parents
  • parents lacking the physical or mental capacity to raise the child
  • parents' whereabouts unknown or parents cannot be located
  • parents hospitalized for an extended period
  • an unsuitable household (e.g., child removed from the household and placed in foster care)
  • married student's spouse dies or student gets divorced

Please talk with your Financial Aid Counselor if you have more questions or would like more information on your dependency status.

Source: https://www.callutheran.edu/financial-aid/policies-eligibility/appeals.html (Tue, 30 Jun 2015)
Pay attention! Our attention spans are shrinking

Are attention spans getting shorter? Are you getting more distracted by multitasking on top of multitasking? Correspondent David Pogue tries to get answers from researcher Gloria Mark, author of "Attention Span," at the same time he talks with Cornell psychology professor James Cutting AND with counselor Lauren Barnett AND her daughters about how distractive technologies are affecting our behavior, productivity and stress levels.


Source: https://www.cbsnews.com/video/pay-attention-our-attention-spans-are-shrinking/ (Sat, 28 Oct 2023)
Releasing stress through the power of music

Music can have a profound effect on both the emotions and the body. Faster music can make you feel more alert and concentrate better. Upbeat music can make you feel more optimistic and positive about life. A slower tempo can quiet your mind and relax your muscles, making you feel soothed while releasing the stress of the day. Music is effective for relaxation and stress management.

Research confirms these personal experiences with music. Current findings indicate that music at around 60 beats per minute can cause the brain to synchronize with the beat, producing alpha brainwaves (frequencies from 8 to 14 hertz, or cycles per second). This alpha brainwave is what is present when we are relaxed and conscious. To induce sleep (a delta brainwave of 5 hertz), a person may need to devote at least 45 minutes, in a relaxed position, to listening to calming music. Researchers at Stanford University have said that "listening to music seems to be able to change brain functioning to the same extent as medication." They noted that music is something that almost anybody can access, which makes it an easy stress reduction tool.

So what type of music reduces stress the best? A bit surprisingly, Native American, Celtic, and Indian stringed instruments, drums, and flutes are very effective at relaxing the mind even when played moderately loud. Sounds of rain, thunder, and nature sounds may also be relaxing, particularly when mixed with other music, such as light jazz, classical (the "largo" movement), and easy listening music.

Since with music we are rarely told the beats per minute, how do you choose the relaxation music that is best for you? The answer partly rests with you: You must first like the music being played, and then it must relax you. You could start by simply exploring the music on this web page. Some may relax you, some may not. Forcing yourself to listen to relaxation music that irritates you can create tension, not reduce it. If that happens, try looking for alternatives on the internet or consult with Counseling Service staff for other musical suggestions.

It is important to remember that quieting your mind does not mean you will automatically feel sleepy. It means your brain and body are relaxed, and with your new calm self, you can then function at your best in many activities.

The links below each open relaxing musical selections in YouTube.

A Moment of Peace Meditation
Aneal & Bradfield, "Heaven and Earth Spirits" track from Life & Love. Lovely contemporary piano music with accompanying instruments and nature scenes.

Echoes of Time
R. Carlos Nakai, from the Canyon Trilogy. Serene Native American flute music, with a picture of Nakai backlit by the sun at the Grand Canyon.

The Winding Path
Ken Kern from The Winding Path. Highly rated, beautiful piano music with accompanying instruments with pictures of exquisite flowers and plants.

Classical Indian Music for Healing and Relaxing
Gayatri Govindarajan, "Pure Deep Meditation" track. Lovely and rhythmic music played on the veena, the most ancient of the Indian plucked-instruments, with nature scenes.

Angels of Venice
Angels of Venice from Music for Harp, Flute and Cello. Classical with 3 instruments with nature pictures.

Earth Drum
"Spirit Vision," (David & Steve Gordon. Serene and lovely contemporary Native American informed-drumming music utilizing Taos Log Drum and Incan Pan along with other instruments and ocean/forest nature scenes.

Buddha Spirit
Aneal & Bradfield, from Light & Love. Reflective but strong contemporary music utilizing various instruments and occasional humming voices with colorful oscillating fractals.

Spa Relaxing Music
Tranquil contemporary instrumental with piano and a fixed candle light.

Relaxation Music: 1-Hour Meditation Candle
Serene contemporary instrumental with piano and one flickering candle.

Sleep Deeply
Dan Gibson. Nature sounds and instrumental, tranquil sleep music.

Weightless
Marconi Union. The sounds on this video are carefully arranged harmonies, rhythms, and bass lines that help slow a listener's heart rate, reduce blood pressure, and lower levels of the cortisol stress hormone.

Source: https://www.unr.edu/counseling/virtual-relaxation-room/releasing-stress-through-the-power-of-music (Sun, 20 Jan 2013)
A mother’s endeavor to destigmatize drug addiction, create awareness about prevention (opinion)

Addiction is a slow process, and the body gradually becomes dependent on a particular kind of drug. This dependency starts affecting the brain and behavior. Being aware and educated about drugs ...

Source: https://thisisreno.com/2024/01/a-mothers-endeavor-to-destigmatize-drug-addiction-create-awareness-about-prevention-opinion/ (Mon, 01 Jan 2024)







Killexams.com LCDC Exam Simulator Screens


Exam Simulator 3.0.9 uses the actual Counselor LCDC questions and answers that make up the Exam Braindumps. The LCDC Exam Simulator is a full-screen Windows application that provides the experience of the same test environment you would encounter in a test center.

About Us


We are a group of Certified Professionals, working hard to provide up-to-date and 100% valid test questions and answers.

Who We Are

We help people pass their complicated and difficult Counselor LCDC exams with shortcut Counselor LCDC Exam Braindumps that we collect from the professional team at Killexams.com.

What We Do

We provide actual Counselor LCDC questions and answers in Exam Braindumps that we obtain from killexams.com. These Exam Braindumps contain up-to-date Counselor LCDC questions and answers that help you pass the exam on the first attempt. Killexams.com develops the Exam Simulator for a realistic exam experience. The exam simulator helps you memorize and practice questions and answers. We take premium exams from Killexams.com.

Why Choose Us

The Exam Braindumps that we provide are updated on a regular basis. All the questions and answers are verified and corrected by certified professionals. Online test help is provided 24x7 by our certified professionals. Our source of exam questions is killexams.com, which is the best certification exam Braindumps provider in the market.

97,860

Happy clients

245

Vendors

6,300

Exams Provided

7,110

Testimonials

Premium LCDC Full Version


Our premium LCDC - Licensed Chemical Dependency Counselor product contains a complete question bank of actual exam questions. Premium LCDC braindumps are updated on a regular basis and verified by certified professionals. There is a one-time payment covering 3 months, with no auto-renewal and no hidden charges. During the 3 months, any change in the exam questions and answers will be made available in your download section, and you will be notified by email to re-download the exam file after an update.

Contact Us


We provide Live Chat and Email Support 24x7. Our certification team is available only by email. Order and troubleshooting support is available 24x7.

4127 California St,
San Francisco, CA 22401

+1 218 180 22490