Kogod School of Business
Jeff: Hello, and welcome to AI-Curious. My name is Jeff Wilser. I'm a journalist, I'm a human, and I am curious about AI. And two years ago, David Marchick came on this show and made a bold claim. He was going to turn the Kogod School of Business into the first AI-first business school in the country. Not an AI lab, not a computer science department, a business school, the kind of place where people study marketing and finance and entrepreneurship.
He was going to rebuild that all around AI. At the time, I remember thinking, okay, that's interesting but ambitious, let's see. So nearly two years later, we invited him back. And here's what I can tell you. Something happened. Bloomberg picked up the AI-first label. Poets and Quants, which is basically the Bible of business school coverage, recognized Kogod for the most comprehensive AI transformation of any school in the world.
90% of their faculty are now teaching with AI. They've launched an AI major, a minor, a graduate degree. But the numbers aren't the interesting part. The interesting part to me is what's actually happening in the classrooms. There's a professor, for example, who was initially AI skeptical. He's been teaching for decades. He's seen deans come and go.
He was not really into it at first, but he eventually got so curious, he threw out every textbook and rebuilt his entire course on entrepreneurship using only prompts. There's a negotiations class where students practice against AI counterparts with different personalities, including one that is just a complete jerk. They're using AI everywhere for a more interactive, engaging back and forth with students.
We get into all of this. And of course, the elephant in the room, the cheating question, we go there. And David handles this more honestly than most administrators I've heard. We talk about why blue books are making a comeback. Remember those? David was quite open and candid about the existential threat that AI poses to universities themselves and whether the entire business model of higher education is about to shift underneath everyone's feet.
David also shares how he uses AI in his own life, including building a custom training program for riding actual Tour de France routes for his 60th birthday, which might be the most compelling personal use case anyone's shared on the show. So with that, please enjoy my conversation with David Marchick, the Dean of the Kogod School of Business. This is his second time on AI Curious, and in my unbiased opinion, it's even better than the first. David, welcome back to AI Curious.
David: Thanks for having me back. I hope I don't mess it up a second time.
Jeff: There is no messing up at any time. So you were last on almost two years ago, I believe, in the spring of 2024. We all know that AI moves at warp speed. So we'd love to hear an update on what you guys are doing now and what has changed. What's kind of the current state of play of AI in the program?
And I'm wondering about it like formally, informally. I'm guessing there are formal things you're doing with a program with AI, but also things you're seeing kind of just organically happening with AI. Let's start with the formal. What's the latest for the program?
David: Okay, well, thanks again for having me. Since we have spoken last time, I'd say we've had a huge transformation at our business school.
In 2024, we were kind of in year one and a half of our now three and a half to four year journey on AI. And I would say in year three, we hit a tipping point where in year one, we were kind of experimental. A few people raised their hands, some reluctantly to say, all right, I'll try this. Year two, we had a little more experimentation.
And somewhere in the middle of year three, we hit a tipping point where basically 90% of our faculty were infusing AI into the curriculum. We have an AI major, a minor, we have a grad degree in AI, and we've infused AI into every part of the core curriculum and into electives.
So we hope that every single graduate of our business school, whether they're undergrad or grad, major or minor, will be AI literate and AI fluent by the time they graduate. So it's been really an incredible journey.
To get to the second part of your question, I actually think the most important thing that we've done is that we've created a culture of experimentation, innovation, and one where failure is okay. That's very unusual in general, and it's even more unusual in an academic university setting.
So honestly, I've gone from knowing exactly what's going on, knowing how many classes have infused AI in them, to really not knowing everything that's going on, and having professors and staff come up to me all the time and saying, I just tried this, and it turned out great, or I just tried this and it failed, but I'm going to try something else.
So we have faculty and staff just making stuff up all over the school, in class, out of class, in operations. And that, I think, is the most important accomplishment we've made: really creating a culture.
Jeff: I love it. I want to dive deeper into all of that on the curiosity front. That is, I mean, again, the title of this podcast is AI Curious, and I've really adopted that as my North Star.
And when I speak to businesses and leadership teams, one of my core messages is curiosity, right? Establishing that culture of experimentation, even play. Because one thing I've seen in the business world, and I'm guessing in education too: what often makes or breaks AI adoption is less the actual hard tech and more the culture and the mindset.
And curiosity is the key to having a mindset that works for all of this. Just to level set a bit, how would you plainly state what you were the first university to do? For listeners who aren't following the ins and outs of higher education, what is your program now known for? As in, okay, we were the first ones to do X, or we were out ahead of the curve on Y.
David: So I think there are a couple of ways to approach infusing and adopting AI. One is what one would commonly think of: you know, someone from MIT or Caltech or Carnegie Mellon who's going to be one of the greatest programmers in the world, going to Google or OpenAI or Microsoft and being a phenomenal programmer. We have some of those, but that's not our DNA.
What we've been the first to do is really infuse in a comprehensive way AI into every program that we offer. So if you're a marketing major at Kogod, there have been marketing majors for decades, you know, Procter & Gamble and all these companies hired marketing majors. If you now come, you learn kind of your father's marketing degree, but you also learn AI tools and tricks.
You become AI curious. And then we've also infused everything. We've doubled down on everything that AI can't do. So critical thinking, working in teams, oral communications, professionalism. So it's basically the fundamental skills plus AI plus all of the what used to be called soft skills, but I would call power skills.
And Bloomberg called us the first AI-first business school in the country. And Poets & Quants, which is kind of the main publication that covers business schools, gave us an award for having the top AI program in the country. And their citation said that we had undertaken the most comprehensive AI transformation of any school anywhere in the world. So basically it's infusing AI into everything we do.
Jeff: Let me just kind of react to one thing you said about AI curious. And I guess I didn't really know this when we did the first podcast, and you were farther ahead. That's the most important thing. In fact, we're now doing a lot of corporate trainings and trainings for large nonprofits. And the CEO of a company that has $110 billion in assets asked me the other day, well, what are we going to get out of the training?
And I said, don't think of it in terms of you're going to know this trick or this tool or this strategy. You're going to become AI curious because the AI tools you're using today will be obsolete.
David: I only take a 10% cut for that.
Jeff: No, only 10%. That's very reasonable.
David: I'm licensing. It's the most important thing. It's the AI culture and being AI curious because in six months, the AI tools we're using today are going to be obsolete.
Jeff: Totally. I couldn't agree more. I have a very similar message when I do trainings of my own: yes, we'll certainly cover some tools, but if you anchor yourself in, okay, I know this tool, you'll be set for three weeks and then you'll be in trouble. So, completely agree. I do want to address right out of the gate what I'm guessing a lot of listeners' first instinct will be.
I think most of the news coverage about AI in education has tended to focus on kids cheating. We all know that. We know their homework is being done by AI. AI is writing papers. Teachers have said that. Parents say that. Students say that. How have you grappled with the cheating question?
And of course we'll cover the flip side of the coin, the ways AI is enriching education. But I think millions and millions of people are very concerned about how AI is melting down the way we have assessed students for decades and decades. How are you guys handling that?
David: Okay. First, let's assume that people cheated before and they're going to cheat now. And there's also a continuum of what's cheating.
I know when I was in high school, I was not a great reader. My wife would say I'm still not a great reader, but I struggled in literature. If you gave me a 300-page English lit book, I couldn't read it. I read the CliffsNotes. Was that cheating? I don't know.
Jeff: But also quick, for those listening audio only, not watching YouTube, David has a wall of books and books and books and books behind him. So I feel like he's being a smidge modest here, but go on, go on.
David: They're history books. They're not literature books. Okay. I loved history and I still do. Yeah. Wealthy people get their kids tutors. Wealthy people get their kids college counselors who help write college applications. Okay. AI makes it much easier, but you know, before AI became so ubiquitous, you could go on to the internet and buy an essay and change it a little. So, okay.
Now, given that, but also knowing that AI does make it easier to do research, cut and paste, submit an essay. Okay. We start by teaching what's wrong with AI and the appropriate and inappropriate uses of AI. So we start by breaking down everything that's wrong with AI before talking about and teaching about what's right about AI. We require disclosure statements.
So if you turn in your homework and it's a statistics, you know, class, we're going to require you to disclose how you used AI and often will require you to show us or copy and paste the prompts you use.
Blue books are back in some ways. And we had one professor give 30 oral exams to students. So yes, AI does make it easier. There is a continuum of students: some who'd say, I'd never use this inappropriately, and some who'd say, I'm going to take advantage of AI and use it inappropriately. They were there before.
So, but we've tried to approach it by kind of ethics, disclosure, and then there are tools. So for example, in Canvas, if you're taking a test and a professor puts it in Canvas, you're not allowed to leave that screen. So you can't go to another screen, put a question into AI, come back, paste it. And so there are tools, but it's a challenge.
Jeff: One of the prior guests on the show described how he thought about AI and learning in a framework I found really interesting, and I want to hear your thoughts on it. This is Victor Varnado, who is a cartoonist for The New Yorker, as well as a comedian, overall smart guy.
And he thought a lot about, okay, you can use AI as a crutch to be lazy or to enrich your learning. An example, he thought of it as like, imagine if you have the ability to talk to Einstein. And you can say, hey, Einstein, help me understand physics. Help me understand general relativity. Or you can say, Einstein, do my homework. Right?
And so it's an incredible tool. How you use it, though, determines whether it's going to be enriching or debilitating. I'm sure you saw a study from MIT a while back that suggested a link between heavy ChatGPT usage and reduced cognitive engagement. So there were students who, when using it too much, basically got lazy.
So I guess, given that framework of, Einstein, do my homework, versus, Einstein, help me understand physics, how are you approaching the program to nudge students and professors to be more in the help-me-understand-physics mode and not the do-my-homework mode?
David: Okay, great question. First of all, everybody's making this up as they go along.
Jeff: Totally.
David: It's so new that you have to be AI curious. And so our faculty, they're making stuff up as they go along. Second, we're actually partnering with Google on a number of projects to measure and assess learning outcomes. So we have some classes where we're giving students some AI tools and some where we're not. And at the end of the semester, we'll see which student group does better.
Third is we're seeing actually better outcomes anecdotally from students. They're producing more professional documents. They're producing more professional presentations. The quality of their research is deeper. And they're using critical thinking skills more. That's the benefit of the proper use of AI.
And then the final thing is we have some faculty who are literally reinventing the future of education. Let me give you an example. We have a professor named Tommy White. He's a professor of entrepreneurship. I would say he was not AI curious to start. He was AI resistant. And, you know, there are a few professors that say I've been around for 30 years and I've seen a lot of deans and I'll outlast, you know, every dean. So some weren't going to do anything.
Tommy, after being initially reluctant, became super AI curious. And he reinvented a leadership class and an entrepreneurship class, got rid of all books, all articles, and the class only has prompts now. So what does that mean? Let's say the assignment is for you to create a business or a podcast. Say I want to create a podcast on being AI curious, because I think there's a niche.
So he'll start with a prompt that says, what problem are you trying to solve? What niche are you trying to fill? That's how you start a business. Then the AI tool that he's created, the set of prompts, the conversation, leads the student down a path of discovery. All right, well, Jeff Wilser already has one. How is yours going to be different? What markets are you going to fill?
And then it gives you a bunch of reading, podcasts, YouTube. It curates your own reading and research instead of having a book on entrepreneurship or an article with case studies. It creates tailored, personalized learning for every student so that the student can really do their own research and figure out what they're interested in. And I think that's actually the future of learning.
And more importantly, it transformed the role of kind of the educator, the teacher from being in the front of the classroom and disseminating information, being the authority figure to being the coach, the mentor, supporter, the encourager to support student learning. And that's an example where AI is really driving spectacular results because it's appealing to students' AI curiosity. And if you're curious, you're going to learn more.
Jeff: Yeah, I love how concretely you put that. And that to me is the great potential promise of AI education, which excites me. The idea that this could be, as you alluded to earlier, massively more inclusive, right? Currently, you need to have a lot of money to get one-on-one tutoring that's personalized and tailored to you.
But the idea of having potentially, literally, a billion-plus students across the world, in all kinds of demographics, being able to get custom instruction that encourages more active learning, old-school Socratic dialogue of question-and-answer back and forth. That's really exciting. So it's cool to see. Though I'm sure many listeners, when they first heard you say, we got rid of all the books, were like, not a great start. Let's get rid of books.
David: And a big part of entrepreneurship is failure. And so maybe after the process, you know, the students said, oh, Jeff Wilser already has the best AI curious podcast. I'm going to try a different idea. You know, like, that market's saturated.
Jeff: What a great plug. I appreciate that, David. So, and I guess to your point, though, it's easy to imagine then this mix of where you're having this back and forth.
The AI is suggesting different avenues for research. And also, maybe there's a component of then the student actually builds something, codes something, vibe codes something, right? It says, oh, cool. Okay. I'm now going to put this theory into action. And there's no learning like doing and then being able to get rapid feedback on that, right? What might have taken in the past, you build some, it might take you a month to build some software tool.
Then your professor gives feedback on that, which takes another two weeks. The whole process is months. Now this could happen literally over one late Saturday, Celsius-fueled binge-coding evening. That could all happen right now.
David: A hundred percent.
Jeff: How has the role of being a professor shifted with this? What feedback have you gotten from them?
You gave us an incredible kind of mini case study of Tommy White. But more generally, what are they doing more of? What are they doing less of? What new skills are required of educators in this kind of AI-first environment?
David: Okay. Great question. So like any organization, you're going to have your top 10, top 20, top 30% who are going to be innovators.
They're off to the races and they've reinvented their classes. You're going to have the bottom X percent who are going to say, I've been doing the same thing for 20 years and I'm not going to change. It's the middle that we're really trying to push. Let me start with the top. So I'll give you a couple examples, kind of riffing off what you just talked about.
We have students, non-quantitative, non-programming, non-STEM-y students, writing code, creating apps, creating their own software tools. And they'll graduate with a portfolio of not only the skills that they're studying, but they'll have on their resume: I created these three, four or five apps.
So in our marketing class, led by professor Kelly Frias, they created different software tools to generate social media posts and to measure their effectiveness. And they did it by coding. In another class, professor Giannis Spiridopoulos teaches finance, and they're using various Google products to code.
And he told me that a wrestler, who's a grad student in finance, created a software tool that measures the correlation between success in the NCAA wrestling tournament and financial outcomes 20, 30, 40 years later.
Because there are a lot of wrestlers on Wall Street, and this student is really smart. He's not a coder, but he's coding. And so it's just incredible. We also have a negotiations class where another professor, Alex Mislin, has created AI negotiation counterparts with different personalities.
So we've all negotiated against someone who is creative and a good kind of solution-oriented negotiator. So that's one counterpart. We've also negotiated against people who are jerks and every point is a fight. And so she's created these personas and now has students practicing their negotiation techniques against AI and it's improved their skill sets.
So most of our faculty have become students again, and they're relearning a lot. And that creates a lot of excitement, a lot of excitement because they're AI curious.
Jeff: I love that. One of the biggest stories this year or in the last 12 months, both in enterprise and the AI landscape in general, of course, is AI agents.
And more recently, “open claw”–style agentic frameworks. To what extent have your students and professors been injecting agentic workflows into this? Are they starting to build agentic tools? Is “open claw” on your radar? Are there people building with “open claw”?
Have you had any of your students’ agents be posting on Notebook, the social media site only for agents? I'm wondering how agents have impacted the landscape so far.
David: We definitely have classes that are teaching students to use agentic workflows. I'll give you a great example.
One of our students created an agentic workflow to make it easier to register for classes and then understand which classes you need to get to graduation. So, you know, sadly universities—I'm not a long-term academic—but universities have created structures and processes that make it difficult.
If there's a four-step process that one could create, universities create an eight-step process. And so students actually said, God, registering to graduate is a total pain. You have to go through 12 steps. And they kind of created a more streamlined process. We're approaching agentic AI very much from a management perspective as opposed to a technological perspective.
It's really understanding workflows. So if you think about it, in business school you have marketing, accounting, finance—agentic AI is really a management tool in our view because you need to understand the workflow. And how do you reduce those workflows and add agentic tools to streamline and eliminate repetitive or unnecessary tasks?
And so like we discussed earlier, the technology is getting easier and easier to program. And for agentic AI, it's really understanding the process, and then you can apply the tool easily.
Jeff: David, what are the biggest risks you identify here? And then what kind of guardrails do you put on there? Here's one example I'm going to throw out there and you can riff on this.
To me, just from my memory of education, so much of university has been about the human connection, right? The human connection with professors or your classmates. And it is easy to squint and imagine a scenario where everyone is so kind of locked in talking to their AI tutors. And granted, the professor helped orchestrate all this.
But if there's just excessive amounts of just talking to AIs and not being in physically with other humans, is there some human relationship element lost in school? That's one risk that occurs to me. What are other kind of, what are the biggest risks that you're having to navigate?
David: So I think there are two significant risks.
One is the risk that you identified, which is the human touch. You know, we teach that you have to start with the human, then go to AI and finish with the human. You have to apply human judgment, human editing, you know, a human touch to whatever the output is. And so we've doubled down on, as I said, infusing group projects,
which have been in every MBA program, but now first-year, first-semester freshmen work in groups. And they're like, this is really hard because, you know, one kid's showing off. Another kid shows up late. Another student, you know, doesn't want to listen.
Jeff: Welcome to life, kids.
David: Welcome to the last four years of my life, right?
Jeff: Exactly.
David: So we've infused that. I'd say the other big risk, the biggest risk, is universities not changing, not preparing their students for the future. And universities are facing four major headwinds right now. Number one is the demographic cliff. Fewer Americans were born 18, 19 years ago. So the freshman cliff.
Number two is the number of international students is really collapsing based on visa policy and also based on kind of a perception of intolerance for people not born in the United States. Number three is the federal funding environment. And number four is AI. And all of these things are coming at universities all at once.
And in my view, it's a great time to kind of take advantage of a crisis and reinvent ourselves. So I think the second biggest problem is universities not changing to adapt to this new environment.
Jeff: Yes. Let's go there now. I appreciate you mentioning the kind of broader macro issues out there. You know, I do wonder, and I'll preface this by saying, I think what you guys are doing is really smart.
I'm biased for obvious reasons. I'm aligned with so much of this. If you extrapolate out further, though, you're able to see, wow, it's incredible what we're able to do with AI, being able to create this tailored, custom experience for a student.
And it is easy to imagine some version of that that can be scaled, that can be deployed elsewhere, perhaps without the tenured professors and so on. Right. So I guess what I'm getting at is, is there an existential risk or does this change in a fundamental way down the road
the entire business model of what higher education could look like, as these tools get better and better, and what began as kind of primitive chatbots continues to develop and feels more and more lifelike? And I'm sure we're pretty close to having, at very little cost, dynamic real-time video, just like the Zoom call we're on right now, where you're interacting with someone over video and it feels like talking to a professor, right?
As that gets better and better, how are you seeing kind of how the actual tectonic plates are going to shift when it comes to universities in general?
David: I do think there's an existential threat to all forms of education—secondary schools, community colleges, universities—because AI changes everything. It creates knowledge at anybody's fingertips.
And so someone that is AI curious can figure something out with AI. And so universities basically have three tools. You can make the product better. You can lower the price. Or you can improve the marketing. What we're trying to do is make what we're doing better,
to help our students more and help our students prepare for an AI economy where, you know, over their lifetime, they're going to have to reinvent themselves many times.
And so I do think there are existential threats to traditional education. And we're trying to run hard to stay ahead of it.
Jeff: I think that is quite wise, just from my layman's perspective here. Switching from macro to a little more micro on the student perspective. What have you seen as far as any kind of shifts in how students are reorienting either what they're studying or what skills they think they need to have to get that job, whether undergraduate or graduate?
And part of the context of my question is, I hear a lot of teenagers ask this, parents ask this: as AI continues to get better and makes it so much easier to just display and demonstrate skills, and as every profession is being, in some ways, transformed in different ways, what the hell do you major in? What do you focus on? You know, what makes sense for your career?
I've spoken to experts recently who said, well, actually, career-wise, you're a lot better off becoming a plumber than a programmer now, right? Which is the reverse of 10 years ago. I'm not saying I endorse that necessarily, but the point is, there's a lot of debate and open questions about where it makes sense to focus for higher education and a career, even within a business school. What shifts have you seen there?
And kind of, how has this changed in your eyes?
David: So we're still in early days of AI. If you look at AI adoption in most companies and large and small organizations, it's still very, very shallow. I do think we're going to see that accelerate. And that will have a significant impact on changing the nature of work. In the same way that I talked about, you know, we're seeing our students just—they're better.
They're doing the fundamental work, but their presentations are better. Their research is better. Their writing is better. I think more is going to be expected of the average employee. I also think that AI breaks down barriers to entry, so we're going to have more individual entrepreneurs and people creating small companies, more self-employed workers.
I don't think anybody really knows what's going to happen with the labor market other than it's going to change. You know, Dario Amodei, who runs Anthropic, has this, you know, doom prediction of 30% of white-collar workers going to be wiped out. But they just issued a study yesterday that said there's no evidence that white-collar jobs are actually being wiped out.
So I guess what I'm advising young people is to pick what you're interested in, same as before. It honestly doesn't matter what you study. I'm a history major with a policy degree and a law degree, in a business school. Pick something where there's a Venn diagram overlap between what you're good at and what you like doing.
And then add an AI component to it and double down on the professionalism and communication and group work. I think it's as important today for a pre-med student to have bio and labs and all the things that one does to get into a good medical school. And then if you added an AI minor on your resume, you're going to not only be much more attractive to that med school, but you're going to be more equipped to be a great doctor.
Because, you know, if you graduate in 2026, by the time you're actually practicing medicine, it's going to be 2033, 2035. AI is going to have totally transformed healthcare by then. And if you are AI curious now, you're going to do much better in med school, in your residency, and when you're practicing.
Jeff: As we wind down, last few questions for you.
What are the elements of the university experience or education that you think, okay, this is a core, this thread, this will not change no matter how good AI gets? Are there any kind of like non-negotiable, here is some turf that we're protecting? We don't want AI touching this aspect of the student experience or the way students learn. Are there any kind of sacred points like that for you guys?
David: I don't think it shifts the requirement that you have the fundamental skills that you needed 10 years ago, 5 years ago, 20 years ago, or, you know, when my parents graduated college. If you're a bio major, you need to know the fundamentals of biology. If you're a history major, you need to know how to read, to write, to make an argument, to have analytical thinking.
If you're a finance major, you need to know all the tools of finance, discounted cash flow, and, you know, understand the fundamentals of finance. But I think that's not enough in today's age, and you need to add AI and then all the kind of what people used to call soft skills. So I don't think you can get away with not having the fundamentals. You need more in today's society.
Jeff: I would co-sign that, yes.
And also, the soft skills you mentioned, I would double down on that personally. I think that as AI gets more and more saturated, our ability to carry on conversations, to ask questions, to connect with people, that kind of uniquely human edge will matter more and more going forward. As you mentioned, David, your background is not historically in academia. So I'd love to have you put your other hat on for a moment.
You've done an incredible job creating this AI curious culture at Kogod. What advice would you give to other leaders out there who are struggling to implement AI? Maybe they're in the early days, or maybe they already have some agentic pilot program that's going okay but is kind of wobbly. What advice do you have for those leaders?
David: So I think that the best organizations have kind of a top-down and bottom-up push. Top-down, it's not really learning the kind of technology of AI, it's being AI curious.
It's a CEO, a CFO, a Chief Operating Officer, a Chief Marketing Officer, or an HR leader demonstrating that they're using AI every day as much as they can and sending the signal to their organization that they should be using AI as well. So from the top, I think it's really cultural, sending the signal that we want our organization to be AI curious, AI experimental, and AI focused.
Then the people at the bottom or the middle of the organization will respond, and they're actually the ones that are going to figure out the repetitive tasks, the efficiencies, the workflows that can be eliminated or streamlined to create more productivity and a better outcome.
So when we are talking to organizations and providing training, the message is really: create a top-down culture and a bottom-up training and workflow so that you have an AI-curious organization.
Jeff: Love it. Final question for you, David. Tell us a bit about how you're using AI in your own personal life these days.
What works either for productivity hacks and also just for funsies at home?
David: I use it all day, every day. Okay, so I'll just give you today. I'm giving a presentation in a few weeks, so I uploaded a bunch of documents into NotebookLM and said, you know, give me a narrative and a structure to make this argument.
And it created a PowerPoint of 15 pages or so. I'm not going to use that PowerPoint, but there were elements of the narrative that I'm definitely going to use in the flow, and some concepts from it. I am interviewing three CEOs in two weeks on AI and the future of health. I know nothing about healthcare. My wife is a healthcare expert. So I uploaded, you know, the latest McKinsey report on healthcare, the Bain report.
I uploaded 10 different reports and asked it to give me a summary, some questions, and some threads. And then a personal one: I love bike riding, and I'm training for a Tour de France ride this summer. I'm going to ride some of the routes they do on the Tour de France with 10 friends for my 60th birthday.
And I created a training program to get ready for that, where basically I said, here's the date I'm going, here's where I am, tell me what I need to do on a week-by-week basis to train to be ready to ride this route. And AI, Perplexity in this case, gave me a training program. So I checked that off: I did my workout this morning and then I threw it into AI.
I also keep a kind of to-do list in AI. Throughout the day, I'm just dictating to my phone, saying add this, or I finished this. And then every morning at 5 a.m. I get an email saying, here's what you need to do today. I check off as much as I can, but oftentimes I see the same things for many days in a row.
Jeff: These are great, actionable examples. Very concrete.
I love it. You just gave away some of my own secrets for podcast research and preparation on my end as well, as far as pumping in documents, information, and studies and asking for synthesis. Congrats on all you and the program have done and what you're doing now. You guys are truly at the frontier of AI education. It was great talking to you. David, thanks again. I really enjoyed it.
David: Thanks so much for having me.
Jeff: Well, there you have it. Thanks again to David Marchick, the Dean of the Kogod School of Business. And thank you to you, dear listener. If this is your first time here at AI-Curious, please subscribe, rate it five stars, and send it to a friend. Thank you also to Jasper Chua, our razor-sharp producer and editor of the show. Thanks again, and see you next time.