At a laid-back campus event, students put questions about AI governance to Taiye Lambo, founder of the Holistic Information Security Practitioner Institute (HISPI), and Dr. Tuboise Floyd of Human Signal, an AI governance researcher and podcast host. The speakers emphasize that AI literacy is a civic and professional survival skill: employers now expect workers to critically evaluate AI outputs, to frame AI literacy as risk awareness rather than tool proficiency, and to ask the right questions rather than become data scientists. Discussion covers deepfakes in short-form media, overreliance on AI (including a lawyer sanctioned for citing fabricated ChatGPT case law), the "never blindly trust, always verify" standard, and the need for continuous auditing, accountability, and an "honest human in the loop," especially in clinical and environmental contexts. Students are advised to build strong domain knowledge, think critically, pursue internships, and invest in AI governance and risk certifications over tool-specific training.
⏱️ Chapters 00:00 Welcome and Setup 00:52 Meet the Experts 01:57 Taiye on Governance Focus 02:53 Dr. Floyd Background and Podcast 04:39 Open Forum Begins 05:02 AI Literacy for Careers 07:23 Threat or Opportunity Poll 10:01 AI Literacy Beyond STEM 10:49 Spotting Deepfakes in Shorts 15:35 Using AI Without Replacing Learning 16:14 Lawyer Case and Overtrusting AI 18:08 Never Blindly Trust — Verify 19:06 Wikipedia Analogy and Real Risks 20:31 Business Ethics Reality Check 21:06 Continuous Audits in Clinics 21:28 Human in the Loop Matters 22:04 Environmental AI Data Gaps 23:13 Public Trust and Accountability 23:33 Honest Human Oversight 25:28 Tokens and Hallucinations 26:51 Bias in Training Data 27:56 Interviewing in the AI Era 30:28 AI Disruption and Generational Shift 33:21 High-Stakes AI Blind Spots 36:02 Rapid Fire Career Advice 41:03 Closing and Next Steps
GUEST
Taiye Lambo, Founder & Chief Artificial Intelligence Officer, Holistic Information Security Practitioner Institute (HISPI) 🔗 https://www.hispi.org 🔗 https://projectcerebellum.com LinkedIn: linkedin.com/in/taiyelambo
Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.
Support Human Signal — help fuel six months of new episodes, visual briefs, and honest playbooks. 🔗 https://humansignal.io/support
Every contribution sustains the signal.
ABOUT THE HOST
Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.
PRODUCTION NOTES
Host & Producer: Dr. Tuboise Floyd Creative Director: Jeremy Jarvis
Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.
Full transcript available upon request at support@humansignal.io
TAGS
AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership
Thank you for coming to this event. It's going to be a bit laid back — this is your opportunity to ask questions of our experts. For everyone online, we have a ton of food you're missing out on, but I wanted to create space for students — many of whom are seniors heading into a new frontier — to ask real questions about AI governance.
Taiye Lambo is with us. He is the founder of HISPI — the Holistic Information Security Practitioner Institute — a think tank focused on information security and governance. We also have Dr. Tuboise Floyd, founder of Human Signal and an AI governance researcher and podcast host.
Thank you, Dr. Floyd — both Dr. Floyds — for having me. I want to be upfront: I am not an AI expert, and I think most chief information security officers and chief information officers, if they're being honest, would say the same. We're all still figuring this out.
What I have focused on for the past three years inside HISPI is AI governance — because governance is a critical, often overlooked component of any AI strategy or program. I can speak intelligently about governance, even if AI as a whole is far too vast a field for any one person to claim mastery.
Dr. Tuboise Floyd:
I'm the other half of the Dr. Floyd team — Dr. Tuboise Floyd. My career spans 15+ years in systems engineering. I've supported the federal government, worked with Tom Frieden at the CDC, and I'm a trained systems theorist and social scientist by background. That training helps me understand the social implications of large language models (LLMs) and where AI is heading.
I run Human Signal and recently completed a rebrand. The podcast is now called The AI Governance Briefing, and it is trending nationally and internationally. We recently broke into the all-time top 100 in the leadership and management category, and we've only been doing this for one year.
Here's what I want you to take into every interview, every co-op, every job search:
1. Employers across every sector now expect employees to critically evaluate AI outputs — not just use them. You may be digital natives who can pick up tools quickly, but the market demands more than usage. It demands judgment.
2. Frame your AI literacy as risk awareness, not tool proficiency. This signals maturity. It shows you're thinking about downstream consequences, not just immediate outputs.
3. Humanities and social science students — this matters for you too. AI shapes narratives, content moderation, and policy recommendations. If you're in foreign service or policy, start thinking about AI as a policy recommendation engine and understand the bias layer embedded in those recommendations.
4. The goal isn't to become a data scientist. The goal is to ask the right questions. When AI shapes a decision that affects someone's life, your job is to ask: Is this the right tool? Will it cause harm? What assumptions were baked into the model?
Taiye Lambo:
Quick show of hands — do you think AI is a threat to your professional career, or an opportunity?
[Audience poll: 3 students raised hands for threat; 6 for opportunity.]
AI is here to stay. The question isn't whether to adopt it — it's how to leverage it while mitigating the risk. That's exactly why we're having this conversation.
I want to pose a framing rather than a question: AI literacy is not a STEM skill. It is a civic and professional survival skill.
The ability to evaluate AI outputs critically — to know when something is real, when it's fabricated, when it's biased — is not a computer science skill. It belongs to every major, every discipline.
Student (anonymous):
I've been watching AI-generated videos show up on my grandmother's iPad. She's following them like they're real. It's getting used for manipulation — and that affects older generations especially. Being able to differentiate between what's real and what's not is critically important.
Dr. Tuboise Floyd:
I'll give you a personal example. I saw a TikTok video of an alligator crashing through a Walmart in Delray Beach, Florida — a place genuinely known for gators. The video looked completely real. It was only when I noticed that the man standing near the child ran in the wrong direction and someone by the door never moved that I caught it.
That's the quality level we're dealing with. These aren't obvious fakes anymore.
Student (anonymous):
Especially with short-form content, you're not in analysis mode. You're consuming for a quick hit of dopamine or shock value, so you don't slow down to analyze details. The tells are there, like a doctor gripping a syringe with the palm of his hand in a physically impossible way, but you only catch them if you deliberately slow down and look.
Dr. Tuboise Floyd:
Georgetown is developing that skill across every major — critical thinking. And that transfers directly to risk management and AI assessment. Is this the right tool for the right job? Will it do any harm? Those are the questions that make you valuable.
There's a highly publicized case that illustrates this perfectly. A lawyer in a federal lawsuit against the airline Avianca used ChatGPT to generate case law citations and filed them with the court.
When the citations were checked, the judge discovered the case law had been entirely fabricated by ChatGPT, and the lawyers involved were sanctioned.
The failure wasn't just that the AI made things up — LLMs hallucinate, that's a known limitation. The failure was that the lawyer never verified the output. When pressed, the AI even confirmed the cases were real. The lawyer took that at face value.
The lesson is the old one: garbage in, garbage out. If an AI's unverified output becomes the input to your decision, and there is no human verification step in between, you are exposed, no matter how capable the model is.
In cybersecurity, we use the phrase "trust but verify" when managing access and potential bad actors. With AI, the standard must be higher: never blindly trust — always verify.
How many of you cited Wikipedia in high school before you fully understood it was crowdsourced? [Hands raised.] Right.
Back then, Wikipedia was open for anyone to edit. The quality of a citation depended entirely on who wrote it. AI tools are similar — they're useful starting points, not authoritative sources. Use them to jumpstart your thinking, frame an idea, or rough out a concept. Then verify.
But in science and finance, the stakes are not an essay grade. When you're funding a project based on AI-generated analysis, you may be making decisions that affect people's health and lives. That's when the "never blindly trust — always verify" standard becomes a professional and ethical obligation.
AI in clinical settings needs continuous auditing — not just a one-time validation. Think about how many times Dr. Floyd has you re-audit your lab results or re-check your equipment. The same discipline applies to AI systems deployed in healthcare.
Human in the loop is not a slogan. It is a structural requirement for when the model fails.
I add one word to the "human in the loop" principle: honest. You need an honest human in the loop.
You can have a human in the loop who simply rubber-stamps a bad output and blames the system when something goes wrong. That's not governance — that's liability deflection.
What we need are people who understand the ethical implications of the decisions they make using AI outputs, who take accountability when the system produces harm, and who have the courage to say "I made a mistake" so it doesn't happen again.
For high-risk AI systems — where lives are on the line — having an honest human in the loop is not optional. It is the last line of defense.
Species classification and climate models carry the same risks as clinical AI when trained on historically undersampled ecosystems. If the training data doesn't represent the full range of what exists in the natural world, the model's outputs will reflect those gaps — and decisions made from those outputs will carry those errors forward.
Think about what's happening right now. When systems that were tracking environmental data get turned off, AI models continue operating on incomplete baselines. When you enter your career, you will need to ask: What data was this model trained on? What's missing from that dataset? What decisions is it informing, and are those decisions sound?
Public trust erodes when AI-generated health outcomes are wrong and there is no accountability mechanism. When an AI system produces a harmful result and the answer is simply "the machine made an error," that is not an accountability mechanism. It's an evasion.
People need to be able to answer the hard questions: How did this happen? What guardrails should have been in place? What will we change?
If we train people to be honest and accountable — not just technically proficient — then the human in the loop becomes a genuine safeguard rather than a procedural checkbox.
Here's a practical thing most people don't know: as a rough rule of thumb, about 1,000 tokens is roughly 750 words of text, a page or two depending on formatting.
When you're in a long AI session, especially in coding, and you start noticing increasing errors or outputs that drift, the model may be nearing the limit of its context window and beginning to hallucinate. You can ask the model directly whether it has lost track of the conversation, but treat its self-assessment with the same skepticism as any other output; the reliable fix is to stop the session and start a new chat with a fresh context window.
Understanding the mechanical limitations of these tools — context windows, token limits, hallucination triggers — is part of what it means to use AI critically rather than naively.
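To make the token arithmetic concrete, here is a minimal sketch, not something shown at the event, using OpenAI's open-source tiktoken tokenizer; the cl100k_base encoding and the 128,000-token window are illustrative assumptions:

```python
# Minimal sketch (assumes `pip install tiktoken`): measure how many tokens a
# piece of text consumes against an assumed context-window budget. The
# cl100k_base encoding and the 128,000-token window are illustrative
# assumptions, not specifics from the event.
import tiktoken

def context_usage(text: str, context_window: int = 128_000) -> int:
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(text))
    print(f"{len(text.split())} words -> {n_tokens} tokens "
          f"({n_tokens / context_window:.2%} of a {context_window:,}-token window)")
    return n_tokens

context_usage("Never blindly trust, always verify. " * 500)
```

Running a check like this before a long session gives you a feel for how quickly a conversation eats its context budget, which is when the drift described above starts to appear.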
When AI models were trained, who was in the training data? And who wasn't?
This is not a theoretical question. There was a case where a medical device company was preparing to release a product that failed to scan accurately on Black and brown patients because the model had been trained exclusively on data from Caucasian subjects. They had funding, momentum, and a go-to-market plan — and someone eventually had to ask that question.
As science students, as future researchers and practitioners, you will encounter AI tools that have been validated on populations that don't represent the people you serve. Your job is to notice that and raise it.
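To show what noticing and raising it can look like in practice, here is a minimal sketch of a subgroup performance audit. The column names and toy numbers are hypothetical, assumed purely for illustration; the point is that disaggregating accuracy by group surfaces exactly the kind of gap described above:

```python
# Minimal sketch: disaggregate a model's accuracy by demographic group.
# The "group", "label", and "prediction" columns and the toy values are
# hypothetical; in practice you'd load a real labeled evaluation set.
import pandas as pd

eval_df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   1,   0,   1,   1],
    "prediction": [1,   0,   1,   1,   0,   0,   0,   1],
})

# Accuracy computed separately for each group; a large gap between groups
# is the signal worth raising before a product ships.
correct = eval_df["label"] == eval_df["prediction"]
print(correct.groupby(eval_df["group"]).mean())
```

A single aggregate accuracy number would have hidden the failure in the medical device story; the disaggregated view makes it visible.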
Many of the students here are juniors and seniors heading into careers. How do you talk about AI governance and responsible use in interviews?
Taiye Lambo:
Don't go into interviews as anti-AI. The market doesn't have room for that posture right now. Instead, show that you are on board with adoption — and then demonstrate that you understand how to use AI safely.
The balanced view is this: "I can help you leverage AI as a tool, and I know we have to do it safely." That framing signals both technical awareness and mature judgment.
If a candidate says AI is all bad and they want nothing to do with it, that tells me they're not current with the real world. But a candidate who can articulate both the benefit and the risk? That's someone I want on my team.
We've seen CEOs of major companies — Coca-Cola, Walmart — step down and openly say they're making room for the next generation because they feel they can't move fast enough with AI. When that level of leadership is saying, "I need to step aside," entry-level candidates who are AI-fluent and governance-aware have a genuine competitive advantage.
You have an advantage that you may not fully appreciate: you are at the beginning of the AI era, just as my generation was at the beginning of the internet.
When my grandmother — a 30-year educator — had to start entering student records into a computer instead of using carbon copy forms, that was the day she retired. Thirty years of institutional knowledge, thirty years of expertise in working with diverse student populations, left because of a technology transition.
That knowledge loss is a governance problem. As you enter your careers, respect the institutional knowledge that predates AI. The people who were doing the work before the tools existed know things the models don't.
The most dangerous person in the room is the one who doesn't know they are using AI to make a high-stakes decision.
We've seen cases where military decisions were made using outdated mapping data — in one reported instance, a school was struck, and investigation pointed to an outdated Google Maps reference in a targeting chain.
I'm not a defense expert, but even I know Google Maps isn't always current. Every time I look up my own address, I see cars from years ago in the street view.
The question isn't whether technology is useful. It's whether the humans in the chain are applying the same critical scrutiny to AI-assisted decisions that they would apply to any other high-stakes judgment call.
Dr. Tuboise Floyd:
That person — the one doing the targeting and planning — needed to stop and say: let's put eyes on this and cross-check it against our own notes. That is the power of observation. That is the scientific discipline that Dr. Floyd teaches in her courses.
Student (anonymous):
I can see that AI is already doing a lot of what entry-level analyst roles used to require. Firms like JP Morgan and Morgan Stanley are hiring former OpenAI engineers to build internal systems. What's the best way to stand out three to four years from now?
Dr. Tuboise Floyd:
My real answer: get books. Read deeply in your subject area. Put down the phone, buy the books, absorb the knowledge yourself.
Your brain is the most powerful computer you will ever own. You have to train it on good data. If you train it on bad data — low-quality content, AI-generated summaries of AI-generated summaries — your judgment will reflect that.
Sit with senior people in your field. Respect institutional knowledge. Understand the solutions, not just the tools. And invest in big-picture, critical thinking — understanding outcomes, not just outputs.
Stop treating AI like a shiny new toy. It is already integrated into how work gets done, and it is already disruptive. The question is whether you are positioned to govern its use or simply subject to it.
Taiye Lambo:
Internships. As many as you can get before you graduate.
By the time you've completed four or five internships, you have real-world context for what the industry is actually looking for — not what it says it's looking for in a job posting. You understand where trends are moving. You have names on your resume that open doors.
And invest your certification dollars wisely. Don't spend them getting certified in a specific AI tool. Tools change. Spend them on AI governance, risk assessment, and risk management programs. That knowledge transfers across every tool, every platform, and every industry.
Thank you to Taiye Lambo and Dr. Tuboise Floyd for joining us today and for helping us think through things we may not have considered before.
Taiye Lambo:
You can connect with me on LinkedIn — I'm the only Taiye Lambo on the platform. If you find another one, that's a deepfake. Reach out anytime with questions.
You have a genuinely exciting decade ahead of you. The landscape is going to change dramatically. Go into it with the cup half full — and master the tools before they master you.
Dr. Tuboise Floyd:
It has been an honor to come back to the Hilltop. You can find me on LinkedIn at Tuboise Floyd — I'm the only one.
In closing: keep your eye on AI governance training and certification programs. Spend your professional development dollars on governance, risk assessment, and analysis — not on tool-specific certifications. That investment will compound over your entire career.
Taiye Lambo is the Founder and Chief AI Officer of the Holistic Information Security Practitioner Institute (HISPI), a think tank focused on AI governance and information security practitioner development. He is also the founder of Project Cerebellum.