AI tools like Alex are genuinely powerful. That power comes with genuine responsibility. This page isn't a list of rules — it's a framework for thinking about how to use AI well. The workshop is built on these principles, and we think they're worth stating plainly.
You are responsible for everything you produce.
When you submit a report, send an email, file a brief, or publish an article — your name is on it. It doesn't matter whether AI helped draft it. The professional standard doesn't change because the tool changed.
This means you must read, verify, and own everything AI helps you create. "Alex wrote it" is not an acceptable explanation for an error — to your organization, your clients, or yourself.
Verify before you trust.
AI systems can produce confident-sounding text that is factually wrong. This is not a bug that will be patched — it is a structural characteristic of how large language models work. They generate plausible text; accuracy is a byproduct, not a guarantee.
Critical claims — statistics, citations, legal precedents, medical information, financial figures — must be independently verified before use. The more consequential the claim, the more rigorous the verification.
Protect what is not yours to share.
When you type into an AI tool, you are sending data to an external system. Most enterprise AI tools process this on remote servers. Some retain inputs for model improvement. Policies vary and change.
Treat AI tools with the same discretion you would apply to any external communication:
- Personal data — Do not paste names, contact details, health records, or financial information about individuals without authorization.
- Confidential business information — Client data, deal details, personnel matters, unreleased financials, and trade secrets belong inside your organization.
- Material non-public information — Never input MNPI into any AI tool. This is a legal matter, not just a policy one.
- Student data — Student records, grades, and accommodation details are protected; educators and students alike should follow their institution's data classification policies.
Be transparent about AI involvement.
Different contexts have different norms — and those norms are still evolving. Academic institutions have explicit policies on AI use. Employers are developing them. Professional associations are debating them.
The default principle is transparency: when AI meaningfully contributed to work you are presenting as your own, acknowledge it. This applies to:
- Academic submissions (check your institution's policy — many now require disclosure)
- Published work and journalism
- Client deliverables where the client has reasonable expectations about your process
- Internal work products where AI involvement would affect how they're evaluated
Using AI as a thinking partner — asking it to critique your draft, stress-test your argument, or help you research a topic — is generally different from using it to generate the work wholesale. The distinction matters.
Recognize and correct for bias.
AI models reflect patterns in their training data — which means they reflect the biases in that data. These biases are not always visible, and the model will not correct for them on its own.
This shows up in concrete ways. AI systems can:
- Represent some groups better than others
- Replicate historical inequities in recommendations
- Produce language that encodes assumptions about gender, race, or ability
- Perform differently across languages, cultures, and demographics
Your responsibility as a practitioner is to:
- Ask who might be harmed or excluded by the output you're creating
- Review AI-generated content actively for implicit assumptions
- Not treat AI output as neutral just because it's machine-generated
- Seek diverse perspectives when AI is being used to make consequential decisions about people
Keep humans in the loop for consequential decisions.
AI can help you think through decisions faster and more thoroughly. It should not replace judgment in decisions that materially affect people's lives, livelihoods, access to resources, or legal rights.
Hiring decisions, performance evaluations, medical treatment, legal representation, financial advice, and similar high-stakes domains require human judgment, professional accountability, and legal compliance — regardless of how capable AI tools become.
Use AI to grow, not to bypass growth.
The most important risk in this workshop is the risk nobody talks about: using AI in ways that erode the very skills it's supposed to help you apply.
If AI writes your first draft every time, you stop developing a writing voice. If AI structures every argument, you stop building structuring intuition. If AI answers every research question, you stop developing your own depth of knowledge.
The professionals who will thrive with AI are the ones who use it to accelerate and amplify genuine expertise — not to substitute for it. Alex is designed to be a thinking partner: it should make your thinking sharper, not replace it.
Special Notices for Health & Human Services
The seven principles above apply to every discipline. But if you work in healthcare, allied health, counseling, social services, or any field where your decisions directly affect patients, clients, or vulnerable populations — these additional guardrails are non-negotiable.
AI cannot make clinical decisions.
AI is a study tool and reasoning partner. It is not a diagnostic tool, a prescribing system, or a substitute for clinical judgment. Every treatment decision, medication dose, ventilator setting, and patient care plan must be validated by a licensed professional or verified against your institution's current protocols.
Applies to: Nursing, EMT & Paramedics, Respiratory Therapy, Dental Hygiene, Pharmacy Technology, Surgical Technology, Physical Therapy, Medical Assisting, Radiography, Veterinary Care, Healthcare Professionals
Never enter protected health information into AI.
Patient names, dates of birth, medical record numbers, diagnoses tied to identifiable individuals, and any other PHI as defined by HIPAA must never be entered into any AI tool. This includes clinical notes, lab results, and imaging reports that could identify a patient. When practicing with AI, always use de-identified or fictional scenarios.
Applies to: All clinical and healthcare disciplines, Health Information Technology, Medical Lab Technology
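To make the de-identification habit concrete, here is a toy sketch that strips a few obvious identifiers from a practice scenario before it is pasted anywhere. The field names and regex patterns are hypothetical, and simple pattern matching like this is nowhere near sufficient for HIPAA Safe Harbor de-identification — it illustrates the mindset only, and fictional scenarios remain the safer default.

```python
import re

# Hypothetical patterns for a few obvious identifiers.
# Illustrative only: regex scrubbing does NOT satisfy HIPAA
# de-identification standards. Prefer fully fictional scenarios.
PATTERNS = {
    "mrn": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

note = "Pt DOB 03/14/1962, MRN: 884213, callback 555-867-5309."
print(scrub(note))
```

Note what the sketch cannot do: it will miss names, addresses, rare diagnoses, and the many indirect details that can re-identify someone — which is exactly why de-identification is a process, not a find-and-replace.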
Verify every drug dose, route, and interaction.
AI can generate medication information that looks authoritative but may be outdated, incorrect, or not aligned with your agency's formulary. Drug dosages, weight-based calculations, contraindications, and interaction data must be verified against your program's pharmacology references, your agency's protocol manual, or a licensed pharmacist. Never use an AI-generated drug reference card in a clinical setting without verification.
Applies to: EMT & Paramedics, Pharmacy Technology, Nursing, Respiratory Therapy, Dental Hygiene, Medical Assisting, Veterinary Care
AI cannot assess risk or provide crisis intervention.
AI cannot evaluate suicide risk, assess danger to self or others, or make safety determinations. If you work with clients in mental health, social services, or any counseling capacity — crisis assessment is always a human clinical responsibility. Tools like the Columbia Protocol and Stanley-Brown Safety Plan require professional judgment that AI cannot replicate. When a client is in crisis, follow your agency's protocols and contact appropriate emergency services.
Applies to: Psychology & Counselors, Social & Human Services, Nursing, EMT & Paramedics
Protect client confidentiality beyond HIPAA.
Social workers, counselors, and human services professionals are bound by ethical codes (NASW, ACA, APA, NBCC) that impose confidentiality obligations beyond federal law. Case details — even when "de-identified" — can sometimes identify clients in small communities or specialized programs. When using AI for case documentation, supervision preparation, or service planning, anonymize aggressively: change demographics, locations, and circumstances that could lead to identification.
Applies to: Social & Human Services, Psychology & Counselors, Early Childhood Education
Regulatory and legal information changes constantly.
Tax codes, building codes, OSHA regulations, pharmacy law, insurance billing rules, and scope-of-practice definitions change regularly. AI training data has a knowledge cutoff and may not reflect the most recent amendments, rulings, or state-specific variations. Always verify regulatory guidance against current official sources — your state licensing board, IRS publications, OSHA standards, or your clinical program's compliance officer.
Applies to: All regulated professions, Accounting, Paralegal, Construction Management
Vulnerable populations require extra diligence.
When your work involves children, elderly patients, individuals with disabilities, people in crisis, or anyone with limited ability to advocate for themselves — the consequences of AI-generated errors are amplified. AI may not account for the specific developmental, cognitive, or situational factors that affect these populations. Apply heightened scrutiny to any AI output that could influence care, services, or decisions affecting vulnerable individuals.
Applies to: Early Childhood Education, Social & Human Services, Nursing, Psychology & Counselors, Physical Therapy, Veterinary Care, Sign Language Interpreting
Further Reading
These frameworks and guidelines shaped how we think about responsible AI in this workshop:
- Microsoft Responsible AI Principles — Fairness, reliability, privacy, inclusiveness, transparency, and accountability
- Google Responsible AI Practices — Practical guidance on building and deploying AI responsibly
- NIST AI Risk Management Framework — The US federal standard for AI risk governance
- EU AI Act Overview — The European regulatory approach to AI safety and rights
- HHS HIPAA Resources — Federal patient privacy and health information protection standards
- NASW Code of Ethics — Ethical standards for social work practice including confidentiality and technology use