Spark of Ages
In every episode, we interview B2B Marketing leaders, executives, and innovators about their successes and challenges, asking them how they broke through and what spark in their careers took them to the next level.
How the US Air Force will use AI/Jason Hansberger, Tim Henderson - Great Power Competition, HAWs, Kill-Chain ~ Spark of Ages Ep 30
This episode explores how technology, specifically AI, is reshaping military strategy and national defense. Colonel Jason Hansberger and Author/Entrepreneur Tim Henderson discuss the implications of AI in warfare, the ethical considerations surrounding autonomous weapons, and the U.S. military's strategic innovations.
• Examining the significant U.S. defense budget and global military expenditure
• Understanding the new age of Great Power Competition
• Addressing the role of AI and innovation in military operations
• Discussing the ethical implications of autonomous capabilities
• Highlighting DAF Stanford AI Studio’s mission and objectives
• Exploring practical AI applications within current military contexts
• Assessing the future of military technology and decision-making processes
Colonel Jason Hansberger and Tim Henderson join us for an eye-opening discussion about the ever-evolving world of technology and defense. Colonel Hansberger, with his expertise in Air Force strategy, and Henderson, a former submarine officer and author, offer a unique lens on how artificial intelligence is transforming military operations. As we navigate a landscape marked by geopolitical challenges, such as North Korea's involvement in Ukraine and Donald Trump’s re-election, we examine America’s strategic role on the global stage and the ethical considerations that come with the use of AI and autonomous technologies in defense.
Our conversation takes a broader approach as we explore the global competition for AI supremacy. We analyze the economic and political ramifications of controlling semiconductor manufacturing and advanced computing technologies. This episode looks into the historical ties between military applications and technology, the controversies that arise when tech giants merge with defense projects, and how initiatives like the DAF Stanford AI Studio can bridge the gap between academia, industry, and the military. We even consider the profound impact AI could have on decision-making processes in complex environments and the nuanced dynamics of autonomous warfare.
As we wrap up, we get personal with our guests, learning about their aspirations beyond their military careers. Tim shares his plans to publish novels inspired by military service, while Jason reflects on his life philosophy of seeking significance over success. Together, we address pressing recruitment challenges faced by the military and propose innovative solutions that could enhance outcomes.
Jason Hansberger: https://www.linkedin.com/in/jason-hansberger-b1b15374
Jason Hansberger is a leader in technology and strategy, currently serving as the Director of the DAF-Stanford AI Studio and AMC/CC Special Advisor & Director, Technology Capability Development for the United States Air Force.
Tim Henderson: https://www.linkedin.com/in/tim-henderson-53772116
Tim Henderson is a multifaceted professional with a background spanning military service, finance, and real estate development. He is a former submarine officer and nuclear engineer from the United States Naval Academy, where he served in various roles on a ballistic missile submarine, ultimately as a Missile Officer. For the past 18 years, he has been a multifamily developer and general contractor in Silicon Valley and is the President of Cypress Group. He is also the author of the novel "Nobody Wins Afraid of Losing".
Producer: Anand Shah & Sandeep Parikh
Technical Director & Sound Designer
Website: https://www.position2.com/podcast/
Rajiv Parikh: https://www.linkedin.com/in/rajivparikh/
Sandeep Parikh: https://www.instagram.com/sandeepparikh/
Email us with any feedback for the show: spark@position2.com
Rajiv Parikh:Hello and welcome to the Spark of Ages podcast. We have another special episode where we're talking about technology and AI, and we're going to learn specifically about defense technology and how the U.S. Air Force plans to leverage U.S. innovation for our national defense. I have two really amazing guests today. This is, you know, I get to meet these amazing people in the world, and then I get to bring them to you. I have Colonel Jason Hansberger. Jason is a leader in technology and strategy, currently serving as Director of the DAF-Stanford AI Studio and AMC/CC Special Advisor and Director, Technology Capability Development for the United States Air Force. Prior to his current role, he participated in the SECDEF Executive Fellowship at Autodesk. He has also served as deputy director of the HAF Strategic Execution Group and as commander of the 1st Airlift Squadron. So he flies planes and he does policy. Jason holds a Master of Arts in Regional Studies with a Southeast Asia concentration from the Naval Postgraduate School, and he did his undergrad at the Air Force Academy. He is focused on applying state-of-the-art solutions in AI and autonomy to solve Air Force problems. This is going to be really interesting; we're going to learn about the special program he's driving. So Jason, you're here today, but your views here are your own, yes?
Jason Hansberger:Yeah. The important disclaimer is that all views expressed on this podcast are mine and not those of the Air Force or the U.S. government, and should not be construed as such. And any mention of any product or service or anything like that is also not an official endorsement from me or from the government.
Rajiv Parikh:And then we have my friend, Tim Henderson. Tim is a multifaceted professional with a background spanning military service, finance, and real estate development. He is a former submarine officer and nuclear engineer from the U.S. Naval Academy, where he served in various roles on a ballistic missile submarine, ultimately as a Missile Officer. He holds an MBA from Harvard Business School; he was in the same class as me. Following his military service, Tim spent four years as an investment banker and venture capitalist in Boston. For the past 18 years, he's been a multifamily developer and general contractor in Silicon Valley and president of the Cypress Group. One of the reasons I wanted Tim here today is that he's the author of an upcoming novel, "Nobody Wins Afraid of Losing." It's a really interesting book about where we're going in the future with some of these autonomous weapons and where the big powers are going. So I thought he'd be great to have here because he's done so much research, and of course Jason, because he's living it; he's part of what's driving the future. Some of the key takeaways you can expect from this episode: AI's role in transforming military operations, challenges in acquiring and developing AI talent for military applications, ethical and strategic considerations of autonomous weapon systems, those three topics and more. Gentlemen, welcome to the Spark of Ages.
Tim Henderson:Thanks Rajiv. It's great to be here. Delighted to be here. Thank you.
Rajiv Parikh:All right, I'm super excited to have you here. This is a very interesting topic, and we have a lot going on in the world today. Let me set up some context to guide our discussion, some things you guys probably already know. U.S. defense spending in 2023 was $916 billion. It accounts for 40 percent of global military expenditures and is greater than the next nine countries combined. However, we have some competitors, and some strong ones. For every dollar China allocates to its military, the U.S. spends $2.77. Maybe we can argue that China, because of where it is, gets more bang for the buck; that would be an interesting topic. U.S. military spending is 3.1 times higher than China's estimated $296 billion. So it's a really significant amount of money that we spend. But we have troops all around the world: what, I think 12 aircraft carriers, troops in over a hundred countries. In many ways the U.S. is the guarantor of stability and peace around the world, so it's really important that America continues to stay ahead. That's why this is such an interesting topic. So let me start with the first question, and this is for both of you. At the end of 2024, we saw some incredible developments globally: North Korea sending troops to fight for Russia against Ukraine, Assad's fall in Syria, Donald Trump's reelection victory, to name a few. I'd like you to set the stage for our listeners, from your point of view, in terms of America's role as we start 2025. Then we can get into how we develop strategy for this environment and how we should develop technology for national defense. So Jason, you want to kick it off?
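A quick back-of-the-envelope check of those figures may help here. The numbers are as quoted above; note that the per-dollar framing and the "3.1 times" framing imply slightly different base estimates for China's budget, which fits Jason's point below that the real figure is hard to pin down. A minimal sketch:

```python
# Back-of-the-envelope check of the defense spending figures quoted above.
us_2023 = 916e9      # U.S. defense spending, 2023 (USD)
china_est = 296e9    # China's estimated spending (USD)

# Ratio of the two budgets: ~3.09x, i.e. the "3.1 times higher" figure.
print(f"US/China ratio: {us_2023 / china_est:.2f}x")

# The "$2.77 per $1" framing implies a somewhat higher base estimate for China:
print(f"Implied China budget at $2.77 per $1: ${us_2023 / 2.77 / 1e9:.0f}B")  # ~$331B
```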
Jason Hansberger:Yeah, sure, I'll kick it off. So maybe the first thing is just on the money piece: that's the publicly available budget. I don't know if China's budget is actually around $200 billion or higher; it's hard to know. So, good metric. If I were to describe the global order, we have a phrase that we use: global power competition, or Great Power Competition. A lot of people compare it, hey, maybe we're moving into a new Cold War, and I actually think that's not a great description. The way I describe it is that the Cold War was kind of the emergence of two competing models for governance and economics. Both of those systems came out of World War II and then grew up in this kind of bipolar world where there wasn't a ton of interaction. There were clearly interactions at the frontier between the Soviets and the U.S.-led Western order, but really they grew up separately, and then the competition essentially was: who's got the best order? They figured that out through the Cold War, and in the 1990s the West kind of emerged as the victor. But this is a different competition. I call it the great power competition for everything, everywhere, all at once, and the reason I call it that is that there are some important differences. The first is the character of this competition: it's not that there were two separate models growing at the same time. The Chinese economy really grew in conjunction with the Western economy, and what we're looking at now is more of a messy divorce than feuding neighbors. So as this competition plays out, and as the heat does or doesn't rise, you have to figure out how those things get disentangled. In addition to the entanglement of the economies, which didn't exist in the Cold War, there's also the technological connection. Today is completely different than it was 30 or 40 years ago: there are hundreds of millions of interactions between Chinese and Western elements every single day, and China and the United States then have to make a decision about whether or not those interactions generate negative outcomes that need a response. So it's not that there's kind of a world here and a world there. They're occupying the same spaces, a lot of the same domains, and there have to be a lot more decisions about what to react to and what not to react to. This is a much more intertwined and difficult situation to manage than the Cold War was.
Rajiv Parikh:It's a really good point, right? You're right, if you go back to the '60s, '70s, '80s, it was very much two different worlds, two different economic systems. We were distanced from each other. Now we're very much intertwined.
Jason Hansberger:That's right. It is a different game, and the interdependencies make a big difference. There are arguments to be made that the interdependencies help maintain the peace. And so the relationship changes every time something's disentangled, because the price of conflict decreases if it doesn't have as much of an effect on your day-to-day economic activity.
Rajiv Parikh:Tim, your thoughts?
Tim Henderson:First of all, on China, I echo Jason's comment: it's hard to know what their actual budget is. The other thing is that our expenditure per military person is much higher than theirs, you know, the current compensation, the benefits package. So I think it would be more interesting to do an analysis adjusted by that factor, like a...
Rajiv Parikh:A purchasing power parity version of it.
Tim Henderson:Yes.
Rajiv Parikh:Which I'm sure they've done.
Tim Henderson:Yeah. So, on the world order: I am what I would characterize as an optimistic neoconservative, but in spite of our record in recent wars. Look at Iraq: a war predicated largely on weapons of mass destruction, which did not exist. In Afghanistan, over 20 years we invested more than $2 trillion and thousands of American lives, and a month after we withdrew, the Taliban had entirely retaken the country. So that was a failure, and to call it anything other than a failure, I think, is counterproductive to learning how to do things better as a country and as a military. Going forward, specifically on the topic of the military use of AI, or highly autonomous weapons, which I believe we're going to come to...
Rajiv Parikh:Really interesting point of view there. We're definitely going to get to that part of the discussion; in fact, that sets up my next question. Think about this geopolitical landscape. Jason brought up the notion of great power competition, moving away from more terrorist-oriented threats. So how does the U.S. military leverage AI and these autonomous, maybe semi-autonomous, eventually fully autonomous systems to maintain a competitive edge? Access to the technology is much easier to attain than it used to be, right? Before, you were building weapons systems where you could be much further ahead; they were super expensive to develop. Nowadays, with AI, it's a little less expensive. So Jason, maybe you could just touch on that.
Jason Hansberger:Okay. So when it comes to competing in artificial intelligence, I don't think the first place that competition plays out is in the military application or military employment of artificial intelligence or autonomy. I think the competition exists primarily economically and politically, for the ability to design and manufacture and field computing and artificial intelligence technology. You see this with semiconductors, which are a great example of trying to control the state of the art in compute through controlling who can and can't have access to it. That requires partnerships, from extreme ultraviolet lithography machines down to the fabrication, the fabs. Essentially, the reason we're competing there is that I use computing and artificial intelligence as, and we can get into it later, kind of a theory of an era-defining technology. If the era is defined by who best wields computing and artificial intelligence, then it makes sense to compete for those things, not only now but into the future. So when we're thinking about these tariffs or things like that, it's about who can control the path of the future of those technologies. And that, of course, can lead to undesired escalations, because the future finds its way to the present, and as the heat of that competition grows, sometimes you lose agency. You may not want an escalation, but when participating in international relations, sometimes you spark an escalation that goes beyond your ability to control. So there are dangers even in economic and political competitions for these things.
Rajiv Parikh:So it's kind of intertwined, right? It's not just pure military and technological development. You need a strong economic capability to build these things, to build these capabilities, and that's part of the competition. So Tim, your thoughts on that?
Tim Henderson:So the story of technology and the military is intertwined, right? Silicon Valley grew out of military technology, largely. But in the past 30 years a divide has grown, a chasm in some sense, between Silicon Valley and the military. So 4,500 AI researchers, for example, have signed an open letter saying that they would like a ban on highly autonomous weapons, and they specifically don't want their projects to be used for military purposes. Google, for example, had Project Maven, which, Jason can probably speak to this better than I can, but my understanding was that it was a project that used artificial intelligence to sort through and categorize the huge volumes of data coming from Reaper and Predator drones, and to make some sense of it in ways that humans were overwhelmed by. And when that came out, it caused a great deal of controversy at Google, in Silicon Valley, and even worldwide. Same thing with Project HoloLens at Microsoft: a set of augmented reality goggles that could be used by soldiers. It's still in the formative stages, a beta technology at best, but you can envision the future: someone has augmented reality goggles, they're in a firefight, and the AI is not only networking them with the cloud and other soldiers but basically instructing them what to do. That caused a great deal of controversy. So we have this kind of schizophrenia in our country over whether we want to use technology for the military or not. And I'm hoping we get into this, because I see highly autonomous weapons as a very interesting subset of technology. To my earlier thesis, they will be the sine qua non of military power, but the majority of people on the planet don't want this, you know, and I can go into great detail on that as well.
Rajiv Parikh:So that leads a little bit to what you're doing with the DAF Stanford AI Studio, right? I think some of this comes from the feeling, and Tim talked about this, that Silicon Valley kind of grew out of military applications. The original semiconductor, the original microprocessor, was developed, or dramatically enhanced, because of its military applications in missile technology. And so some of what you're doing here is building bridges between universities, companies, and the military. So maybe talk about the problem and what you're looking to actually solve.
Jason Hansberger:So my very big macro takeaway is that humans define themselves in terms of the era-defining technology; we define ourselves in terms of technology. I start this argument with: it's not just a technology, it's the technology that's most important in building both economic and military advantage, and it has to do both. Typically this technology is very ubiquitous and useful across many, many things. The very first age of humans we think of, we call the Stone Age, and the reason we call it the Stone Age is that the manipulation of stone into tools and other useful things generated this durable and ubiquitous advantage, in both the military and economic sense. Then that was displaced by bronze, which was displaced by iron, and then, you know, we call it the Industrial Revolution, but I really think it's the electrical revolution: it's really the ability to generate and manipulate and distribute energy in order to magnify human activity. Then in the 1950s we had this idea, and it was a common belief, that we were entering the nuclear age. But we didn't ever enter a nuclear age, and the reason we didn't is that it was too limited. As a weapon, it didn't displace all conventional military activity. As an energy source, it didn't displace all conventional energy sources. It had some limited medical uses, but it wasn't wide enough to be an era-defining technology. Where I think we're at today is compute, and its more important application, which is artificial intelligence and autonomy. So a couple of years ago, as I was working at Headquarters Air Force, to me we weren't talking enough about what I was thinking of as the era-defining technology. And it's not just for military application, it's for national security, which is really rooted in economic strength and growth and dynamism, and the same with the military. If this is the technology, then we really need to be at the place where the state of the art is, to ensure that we're advancing our national interest in the advancement of these things, and that they're being developed in a way that's responsible. And so, through my partner here, who is a PhD student who just recently defended his thesis, you know, I'm telling this story as if it's really deductive, but there's a lot of intuition too. He and I, at the same time, had the same feeling, and we were connected via a mutual friend who said: you guys should probably work together, and you should do it here, if what you want to do is have some kind of influence on where the research is going for AI and autonomy and compute. So that's how we came to be here. I really very strongly believe this is the technology that will drive humanity. And the thing that displaces compute as the next era? I don't know what it's going to be, but I know that if we saw it today, it would appear as if it's magic.
Rajiv Parikh:That's right, it does feel like that when you use these things. Just when we play with ChatGPT: I feed in my MRI results and it not only gives me a breakdown, it sends me an image, and it tells me what I should do next, right? And I showed it to my surgeon, and she's like, "Wow. Well, at least you still need me to do the actual work." So it's pretty magical. So, to talk about what you're doing today: this whole program is about bringing the different groups together, right? And it's one of the many different programs the Defense Department and the military branches run. So maybe just talk about what you're trying to accomplish with it.
Jason Hansberger:The way that I see artificial intelligence employed today, kind of the existing technology, is that someone is an expert in something. They're an expert coder, they're a doctor, they're an animator, and they also happen to have enough expertise to recognize how to employ an artificial intelligence tool. Then they take that tool and apply it in a digital domain or a very highly structured physical domain, and they can achieve some pretty incredible increases in their productivity. But that breaks down really, really quickly when you try to apply an artificial intelligence tool in a highly complex domain, or something with high dimensionality. It's called out of distribution: essentially, your artificial intelligence will encounter an out-of-distribution event, which is an edge case, and then it doesn't know what to do. So if we want to bring artificial intelligence tools into the physical world in a way that's trustworthy, safe, and reliable, then we need to advance the way that we build our algorithms. We need more modeling and simulation to reduce the sim-to-real gaps, so that I can more accurately iterate and build the models that I need. And I need more efficient compute to bring this kind of inference and optimization to the edge devices that the military wants to use. So our research center is really focused on these three research vectors. That's what we're here to do: advance in those three research vectors, because we think that's what we're going to need to field the kind of autonomy, and I can get into this later, but the kind of autonomy I'm envisioning is not from the very start to the very end. I don't think we have the technology now, or even in the near future, to fully autonomize activity in a highly chaotic physical environment.
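To make the out-of-distribution idea concrete, here is a minimal sketch of one common pattern: score how far a new input sits from the data the model was trained on, and hand off to a human when it looks unfamiliar. The Mahalanobis-distance scoring and the 99th-percentile threshold are illustrative assumptions, not anything the studio has said it uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training distribution": feature vectors the model saw in training.
train = rng.normal(size=(5000, 4))
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def distance(x):
    """Mahalanobis distance of input x from the training distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Calibrate a threshold on the training data itself (99th percentile here).
threshold = np.percentile([distance(x) for x in train], 99)

def route(x):
    """Proceed autonomously on familiar inputs; escalate the edge cases."""
    if distance(x) > threshold:
        return "out-of-distribution: raise hand, request human intervention"
    return "in-distribution: proceed autonomously"

print(route(rng.normal(size=4)))                # typical input: usually proceeds
print(route(np.array([8.0, -7.0, 9.0, -8.0])))  # edge case: escalates
```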
Rajiv Parikh:So right now, what you're talking about is: we are using AI techniques, but it's not fully autonomous agents yet, because when you get into the real environment, beyond the compute environment, out into the real world, all kinds of things are happening that may not be well characterized and well understood.
Jason Hansberger:Yeah, just look at self-driving cars and how hard this is. Multiple trillion-dollar companies have tried and failed to deploy self-driving cars. We've structured the environment for those things to work, and we have millions and millions of miles of data, and you're still getting a disengagement rate that doesn't really make it fully autonomous. And they're doing that in a pretty friendly environment. Now imagine if all the pedestrians, everywhere it's going, and all the other cars were purposely trying to make that car crash. The disengagement rate would be through the roof. It really wouldn't be a usable technology; it assumes within it a cooperative environment. But if you're going to take autonomy into a combat environment, it's the opposite. Everything your competitor, your enemy, is doing is going to try to make it more complex, more chaotic, and introduce edge cases that aren't going to make that autonomy useful. So when we think about autonomy, to me the design principle is really about maximizing the use of human cognition, not displacing human activity. Displacing human activity just isn't technologically feasible within the near future.
Rajiv Parikh:I see. So if you think about it, the closest thing we can see to a real-life environment is what's happening in Ukraine, right? Initially a set of drones came out and caused asymmetric damage to more expensive vehicles: tanks, missile launchers, et cetera. Then eventually Russia comes back and they're able to use radar jamming, maybe counter-drones. So you're in this environment where whatever you create, there's going to be a counter to it, and it's a much different environment than driving down the freeway, even though that might seem like a combat environment.
Jason Hansberger:Yeah. You know, combative driving is at least just aimed at drivers' own self-interest of getting home sooner, not at making sure that you crash on the way there.
Rajiv Parikh:Yeah. So what are some of the practical applications? You're saying that in today's world there are ways you can use AI and autonomous technology to improve the development of systems or improve the testing of systems. Maybe you could just talk through some of those projects.
Jason Hansberger:Yeah. So one of the projects we're working on is using machine learning techniques to modernize the way that we go about flight test. Right now we employ what are called engineering best practices to design a testing profile to explore the flight envelope of an airplane. A flight envelope is the set of parameters within which an airplane can fly without departing flight, crashing. The way we've previously done this is we said, well, the plane needs to be in a negative-20-degree dive, pulling three Gs, with the gear door open, or something like that. And if the test pilot misses any of those parameters within a kind of arbitrarily defined window, then they say that one doesn't count and they throw it out. So what we've said is, well, there's probably a lot of useful data even if you don't hit this heuristically driven point, so let's just use that data. By applying this machine learning technique, we're now able to use all of it. For test pilots, it typically takes six attempts for every one testing point, and now we're able to say, hey, we could do this in six times fewer flights, because you're going to hit your point every time since the tolerances aren't so tight. So six times faster and for six times less money, we can achieve the same kind of surety in understanding the parameters of flight. And this is kind of a great example of our approach, which is to really go inside an Air Force organization, a problem holder, and help them define the problem, which we can then connect to an outside academic solution if it requires research, or to commercial off-the-shelf, industry state of the art, or maybe a combination of both, to help them solve their problem.
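A toy version of that idea: rather than discarding every run that misses the target condition (the roughly six-attempts-per-point regime) and re-flying, fit a model over every run actually flown and read the envelope off the model. The quadratic response, the scatter, and the numbers here are invented for illustration; the episode doesn't specify the actual technique:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the true (unknown) response of some flight parameter vs. dive angle.
def true_response(angle):
    return 0.5 * angle**2 - 3.0 * angle + 10.0

target_angle = -20.0   # the condition the test card asks for (degrees)
tolerance = 0.5        # the "arbitrarily defined window" around the target

# 60 test runs: pilots aim for -20 degrees but naturally scatter around it.
flown = target_angle + rng.normal(0, 2.0, size=60)
measured = true_response(flown) + rng.normal(0, 1.0, size=60)

# Old approach: keep only runs inside the tight window; throw the rest away.
keep = np.abs(flown - target_angle) < tolerance
print(f"runs usable under the old approach: {keep.sum()} of {len(flown)}")

# New approach: fit a model over ALL the data, then read off any condition.
coeffs = np.polyfit(flown, measured, deg=2)
estimate = np.polyval(coeffs, target_angle)
print(f"model estimate at {target_angle} deg: {estimate:.2f} "
      f"(true value: {true_response(target_angle):.2f})")
```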
Rajiv Parikh:So that's really cool. As you guys know, I'm into marketing, so is that your go-to-market? You have problems that are happening when you're developing systems at the Air Force, and now you're saying, well, how do I connect research or how do I connect companies to this? What's your go-to-market for that?
Jason Hansberger:Yeah, my go-to-market is twofold, because my primary customer is the Air Force. My first value proposition is: I can help you define a problem. A lot of times in the Air Force, and other places too, the problem isn't well defined, and I use this example to explain where problem-defining fits in our approach. One morning I wake up, and I have four kids, so this makes sense to me. I wake up and my wife says, hey, we're out of baby formula; you need to go to the store to get some more. From a military sense, okay, let's do a mission analysis: we had a logistics failure, but that's now irrelevant; we'll have to fix it in the future. Now I have this no-fail mission: get the formula or the baby dies. So I salute and head out to the car to go get the formula. But when I get out to the car, it's snowing. You may say, okay, now we know the problem: I need to be able to get to the store no matter what, whatever the environmental conditions. So I'm going to solve this problem with a six-wheeled vehicle with 36 inches of ground clearance and all-wheel drive, with a snowplow and some autonomy built into it. The problem, though, is that I haven't really defined the problem. What I've done is an environmental analysis. And this is where we come in: we bring the technical expertise to help you get one step further, which is that the real problem is the coefficient of friction between your tires and the road, or the lack thereof. So there's a mechanical solution, studded tires or chains, or a chemical solution in the form of salt, that we could apply to this problem, and you could be on your way using the existing technology that you have. So our value proposition, first, is: let us bring the technical expertise to you, the problem holder, the operator, so we can get to either, and this is the fastest, a novel application of existing means, which you can very quickly field and spread to an enterprise, or, and this is slower but sometimes necessary, the development of a novel technology, which requires us to go do research.
Rajiv Parikh:So part of the problem is that in the past, you might have designed a vehicle for that one significant situation where you have to be 36 inches above the ground and able to go through snow, right? You might have designed this really special vehicle for that. But instead you're saying, no, this is not going to happen every time; it's going to happen sometimes, maybe, and I'm going to design something that can be more adaptive. So your solution is to help people understand those problems and then bring out potential solutions from different constituents.
Jason Hansberger:Yeah. One, we'll help you solve the problem, and two, we can connect you to the state of the art in industry as well. Because part of it might be: well, I just didn't even know that someone sold chains. I didn't know there was salt you could put on the road. I didn't know someone had already developed a studded tire. And if you had known that product existed, it probably would have pulled you further into the problem, because you would have said, no, no, wait, guys, we don't need 36-inch tires for this one-time thing; we could just put studded tires on the car and we'd be just fine. So part of it is just bringing an awareness, and I don't have it all either, right? But it's building a network, and being geographically located here gives you a better shot at knowing someone who might know of the solution.
Rajiv Parikh:And are you offering funds for those companies? Is this part of that?
Jason Hansberger:Yeah. So there are two pieces. One: when we find a company, we can put it under contract through different mechanisms, like the Federal Acquisition Regulations, but there are also some non-FAR mechanisms, like Other Transaction Authorities. It gets really arcane, really esoteric, really fast, but we do have mechanisms so that the problem holder can put someone under contract to help them solve their problem. For example, we just hosted AFWERX, which is kind of our innovation acquisition arm, for a two-day workshop. We're building an autonomy architecture for Air Force Special Operations Command's autonomous technology, and we had 40-plus vendors show up for two days to help us define problems, and then we're going to evaluate that architecture for utility. So that's one. The other is garnering what are called research and development funds to bring to Stanford to conduct research here if needed. Our primary partners are the AeroAstro and Electrical Engineering and Computer Science departments.
Rajiv Parikh:That's super cool. It's a way to get things done quickly and enable things to move faster. So, when you talk about highly autonomous weapons, Jason, and this is maybe your own personal view, not the Air Force's point of view, where do you think things are going?
Jason Hansberger:Yeah. The first place it has to go is developing algorithms that can work with people. I need an autonomous tool that I can task. At a very low level we can say what I want it to do: fix a video problem or an audio problem. But in the physical environment, let's say I want it to fly from point A to point B. In a peacetime environment that's pretty standard and very well structured, especially in the U.S. air system, and you can build an autonomous solution to that. But if I give it a task to accomplish, I need it to either accomplish the task given the constraints and restraints I gave it up front, or recognize when it can't do it with the level of surety I've told it to. So I may say: hey, if you are 90 percent sure that you can accomplish this task, keep going and do it, but as soon as you realize you're not at that threshold, I need you to raise your hand and say, I need a human intervention, or I need to let you know that I'm not going to accomplish this in the way that you thought. This is actually really hard. It's probably only one step less hard than saying: go from takeoff to landing, do everything in between, and never need any intervention. One step below that is: get this task done, but recognize when you need help. The important thing is that this is how humans work, right? Rajiv, if you and I are working together, say you're my boss and you say, go get this done. I go do it until the point that I reach the limit of my agency or the limit of my capability, and then I put my hand up and say: hey, you need to give me more agency to go do this thing if you want me to finish the job; here's where I'm at; are you good with me going forward? Or: hey, I can't figure this out, I just need another perspective, or I'm out of bandwidth, you need to send me someone else because I can't get this done on my own. That's how humans work together, and I think that's the kind of autonomy we need to build, so that in uncertain environments, where there's potential for encountering edge cases or novel cases, the autonomy can augment a human. If these systems are too fragile, they're just going to consume all the human cognition in trying to supervise them. If anybody drives a Tesla, you'll know this: I'm driving down the freeway and I get frustrated at it for yelling at me for not looking at the road enough, when the car is capable enough to safely execute the task at hand without me having to stare down the road, because I can see what I need to see on the screen and I'm geeking out on the energy consumption. So there's a policy problem there, right? But then when I drive it in a neighborhood with no lines and no curbs and trees randomly placed, it's more work for me to supervise its physical execution of the driving task than it is for me just to drive it myself. When you have those mismatches, it's not super useful.
So for us as an Air Force, and this is my view, my thinking: our design principle needs to be the maximization of human cognition, and then you build out capability from there, versus wanting to take the human completely out of the thing and let it go do its thing. I think that's really, really difficult and more likely to generate either undesirable outcomes or mission failure.
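Jason's "raise your hand at 90 percent" design maps naturally onto a confidence-gated control loop. Here is a minimal sketch of that pattern; the Task structure, the step format, and the hard-coded confidences are all assumptions for illustration, not an Air Force interface:

```python
from dataclasses import dataclass

@dataclass
class Task:
    steps: list                      # remaining steps to execute
    confidence_floor: float = 0.90   # "if you're 90 percent sure, keep going"

def estimate_confidence(step):
    # Stand-in for a real model's self-assessment of this step.
    return step["confidence"]

def execute(task):
    """Run steps autonomously; hand off the moment confidence drops below the floor."""
    for step in task.steps:
        c = estimate_confidence(step)
        if c < task.confidence_floor:
            # The "raise your hand" moment: stop and report, don't press on.
            return f"ESCALATE at '{step['name']}' (confidence {c:.2f})"
        print(f"executing '{step['name']}' autonomously (confidence {c:.2f})")
    return "task complete"

mission = Task(steps=[
    {"name": "taxi and takeoff", "confidence": 0.99},
    {"name": "cruise to waypoint B", "confidence": 0.97},
    {"name": "land in degraded weather", "confidence": 0.62},  # novel edge case
])
print(execute(mission))
```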
Rajiv Parikh:That's really helpful to understand. So, the Department of the Air Force's DAF-Stanford AI Studio is working to ensure that the DAF has enough personnel with PhDs in AI and related fields. Considering the strategic importance of AI for national security, what unique opportunities does the DAF Stanford AI Studio offer that may attract talent seeking to make a meaningful impact beyond commercial applications? How do you address the value proposition for AI experts?
Jason Hansberger:Yeah, so maybe there are two audiences for this. One is Air Force and Space Force airmen and guardians who are out there and really want to be connected here. Getting into Stanford for a PhD is really, really hard.
Rajiv Parikh:Oh yeah, my son is applying. It's hard.
Jason Hansberger:But I think us having a presence here helps reduce some of the risks that a professor would consider when taking on an Air Force student. There are two hurdles for an airman or guardian to clear: one, for the Air Force to approve you to go study for a PhD, and then the even harder one of getting an elite research institution to accept you for admittance. But having the studio here, because of those personal relationships and connections to some of the PIs, the principal investigators, the professors, helps a professor say: this is a person who I think can successfully navigate this very, very challenging PhD program in three years. Because that's what the Air Force requires: a three-year PhD, sometimes expandable to three and a half. What most people are doing in five years, we're asking our airmen to do in three. So by connecting to the studio as an airman now, before you apply, you have an opportunity to do some research, show some of your research chops, and make some of the connections with the people who make decisions about admissions, or at least make recommendations for admissions. I think it definitely increases the probability of your acceptance, although ultimately it's about you and whether you have what it takes. From the civilian side: if you're a PhD student at Stanford or you're a professor at Stanford and you want an opportunity to have your research fielded immediately on physical assets through our partnership with, say, the test center, this is a really exciting way to prove your research in the real world on a timeline that would be really difficult to achieve otherwise, because not everybody has airplanes they can dedicate to testing profiles. So I've found that a lot of professors become excited at the prospect of being able to not only do applicable and useful research, but then validate it in the real world on a really accelerated timeline.
Rajiv Parikh:Well, that's super cool. Three years is very fast for a PhD, I love it, and it's an exciting way to get your project funded at an elite place. It's really amazing. So, Tim, I have this question for you, and I think you've been alluding to it. One of the reasons you wrote the book is to help us think practically about where these things are going. As AI systems become more prevalent in military decision making, here's a question that's on the minds of a lot of our listeners. We always think way out and say, oh, it's like Terminator or Skynet, that kind of situation. So how do we make sure that humans remain the ultimate decision makers? How do we maintain accountability for actions taken by AI systems? And as a society, how do we ensure that the development and deployment of these weapons aligns with our values and international norms?
Tim Henderson:First of all, I'll say that I believe the U.S. military currently is, if you look back through history and across geography right now, probably one of the most ethical militaries that has ever existed in the history of humanity. You saw it in Iraq and Afghanistan: we had significant civilian casualties, unfortunately, but we also took enormous pains and went through elaborate protocols to avoid that as much as possible. And you're also seeing it right now in what I will say is our reluctance to use highly autonomous weapons. Now, I feel quite strongly about this: I think that eventually that reluctance will go away. It will go away entirely. Highly autonomous weapons, or HAWs as I call them, weapons which will navigate over large geographies and long periods of time and kill with complete autonomy, will become the sine qua non of great power conflict. Maybe not against the Taliban, but in a great power conflict against Russia, China, North Korea, Iran, within the next 30 years, and that will happen across all arenas. We're seeing it right now in Ukraine, primarily in aerial warfare, but also in sea surface warfare and ground warfare; it will also expand to space, cyber warfare, psychological warfare. I think what will happen is we will build these capabilities slowly, and then, and it could be a decade to three decades in the future, the time of another 9/11 or another Pearl Harbor will come, and these nuanced reservations will evaporate. That's my belief. If you think back to World War II, there's a very instructive historical fact: prior to Pearl Harbor, unrestricted submarine warfare, submarines targeting merchant ships, was illegal, and the U.S. was a signatory to not one but two different treaties making it illegal. You could not simply shoot an oiler or a Japanese merchant ship without first evacuating the crew to a position of safety. And within six hours after Pearl Harbor, the uniformed officers in the Pentagon, in the War Department, said: conduct unrestricted submarine warfare against Japan. There's no evidence that they consulted the civilian authority over them. So it's a very interesting historical example of how, when your back is against the wall, things change, and they change very quickly. I think another great example is what we're seeing right now in Ukraine. Ukraine has made enormous strides in autonomous warfare, in ways that, prior to Putin invading several years ago, I don't think anyone would have predicted: the effectiveness of drone warfare in aerial, sea surface, and ground warfare. I think it was eight or nine days ago that the Ukrainians fielded what I would call an autonomous air-ground assault platoon. Now, these weapons were not completely autonomous; they were first-person-view weapons. But they attacked with no humans attacking alongside them. So I can go through why I believe in this sine qua non hypothesis, and look, I put out 30 years as the horizon; it could be 20 years, it could be 40 years. It's hard to say with precision, right?
Rajiv Parikh:Yeah. So you think, basically, once you're attacked, a lot of these rules kind of go out the window. At the same time, there are some things we don't do: we don't use nuclear weapons, we don't use biological weapons, and for the most part we don't use chemical weapons. I understand there are certain types of weapons that can blind people in battle that are not being used. I wonder whether, just because you have it, you have to use it, or maybe it'll backfire, it'll boomerang on us. I don't know. Jason, do you have a thought on this one? You guys have studied this way more than I have.
Jason Hansberger:Yeah, I think international relations theory would definitely point to norms changing under extraordinary pressure. Maybe a way to think of it is that values, or restraints that limit your effectiveness, can become luxuries that you can't afford when you're contemplating your own existence. And so you will see rapidly evolving norms, rules, policies, even laws, in proportion to the threat that exists.
Tim Henderson:So, Rajiv, back to two points. One, on the difference between nuclear, biological, and chemical weapons and highly autonomous weapons: they're quite different. There's a book called Army of None by Center for a New American Security analyst Paul Scharre that does a really nice job of analyzing this. It basically says, look, it would be a big mistake to extrapolate the relative success we've had at preventing proliferation of NBC weapons to autonomous weapons, for the following reasons. One, they have enormous military utility, which nuclear weapons don't really have; obviously nuclear weapons are extremely destructive, but they also carry enormous stigma, and so people have not used them since 1945, right? Two, they're not clearly defined; after 14 years of going back and forth, the UN still doesn't have a definition. Three, they lack a horrific nature; NBC weapons have a pretty clearly defined horrific nature. Four, autonomous weapons are often transparent: if a Predator or Reaper drone launches a Hellfire missile, you can't tell, if you're on the ground, whether an algorithm authorized that launch or whether a human did. And five, they're the ultimate dual-use technology. An autonomous vehicle algorithm, Waymo or whatever company, gets developed, and eventually that algorithm finds its way to controlling tanks or armored personnel carriers or other autonomous military vehicles through, you know, espionage, through...
Rajiv Parikh:Somehow technology gets out, right? It's always going to be, or it already might be developed the other way, and it's coming one way or the other; we don't know for sure. So you're right: one is a super-high-cost weapon with enormous stigma, where the other is more incremental, with common usage and dual-use usage, so it can seep in. And in a way we do have wars in the sense that we have cyber wars, and because they're quote-unquote low cost, they're happening all the time. There are all kinds of things happening that are low cost, quote-unquote. And so the argument, which I think you make really well, is that these sorts of weapons, when they make sense, we'll use them, because they'll help us win. Okay: AI and autonomy present the ability to enhance various aspects of the kill chain. How can the military effectively leverage AI to improve the speed, accuracy, and effectiveness of this process while accounting for the human element?
Jason Hansberger:Yeah. So I don't really use the term kill chain too much, because it's not a great term. I like to say long-range decision making with imperfect information.
Rajiv Parikh:I like that. Long-range decision making. You must have an acronym for that, like you had for everything, everywhere, all at once.
Jason Hansberger:I don't yet. I probably do need one. But the challenges faced in long-range decision making under imperfect information aren't unique to a kill chain. There's a lot of research on, say, where to pre-position your Ubers. This is a very common problem faced by anybody trying to manage a lot of elements in a geographically diverse space while collecting information, because we have sensors, and so does any technology company. So how do you work these optimization problems and then seek to make better decisions from them? As for how to preserve the human element in it: to me, it's ensuring that all your operations are embedded with clearly thought-out, values-driven constraints and restraints. Even if we say, and we're not there, but let's say there is this fully autonomous thing that you can send out on missions, nothing's really fully autonomous, because at some point it's being given a mission directive from a human. Within that mission directive you're saying: these are the tasks you have to accomplish, these are the decision parameters that must be met. And then there's somebody building those. So there's lots of humanness within all of this, until the autonomy spawns itself and runs its own country and things like that. These things reflect the humans who built them. So I think making sure that, from the very start of the creation of any kind of autonomous system, you're embedding it with the kinds of values that are important to us, that's where you have to start. And then, based on policy and values and the situation, you need to put gates at decision points to ensure that there is a human there to understand it and to make the decision.
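One way to picture those "gates at decision points": encode the directive's values-driven constraints as explicit checks, and require a human sign-off before any gated category of action. Every name here (MissionDirective, the constraint fields, the gated-action set) is hypothetical, sketched only to illustrate the shape of the idea:

```python
from dataclasses import dataclass, field

@dataclass
class MissionDirective:
    objective: str
    constraints: dict   # values-driven limits, set by humans up front
    gated_actions: set = field(default_factory=lambda: {"weapons_release"})

def authorized(directive, action, human_approval):
    """A decision gate: constraint checks first, then a human for gated actions."""
    for name, check in directive.constraints.items():
        if not check(action):
            return False, f"blocked by constraint '{name}'"
    if action["type"] in directive.gated_actions:
        if not human_approval(action):
            return False, "gated action: human declined"
    return True, "authorized"

directive = MissionDirective(
    objective="surveil sector; engage only confirmed military targets",
    constraints={
        "inside_operating_area": lambda a: a["in_area"],
        "target_positively_identified": lambda a: a["pid"],
    },
)

# A human on the loop approves or declines gated actions.
ok, verdict = authorized(
    directive,
    {"type": "weapons_release", "in_area": True, "pid": True},
    human_approval=lambda a: True,
)
print(ok, verdict)
```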
Rajiv Parikh:So Tim, do you think AI will eventually be able to set its own mission directives? And if so, what should we do about it?
Tim Henderson:I agree with Jason in that, in my timeframe of 30 to 40 years, a human is still sort of setting the parameters, right? This is the commander's intent. But my definition of a completely autonomous weapon is, again, one that navigates over large geographies and long periods of time and kills with complete autonomy. No one's picking a particular target, saying I want you to kill this particular tank. It's: okay, I want you to go out over this thousand square kilometers over the next three days and kill any enemy vehicle or troop concentration you see in that time. That's what I term a completely autonomous weapon, and I think they'll be very effective across all those arenas. I'll tick through the reasons quickly, and if you have a question on one or two of them, let me know. One is reaction time, whether it's a Navy Phalanx, a collaborative combat aircraft, or a Bullfrog anti-drone machine gun. Two, they operate without fear and fatigue. Three, they can be engineered to function largely imperviously to G-force, cold, heat, altitude, and they're largely impervious to biological, chemical, and radiological weapons. Four, they have enormous cost advantages: you strip out the armor plating, you strip out the human user interface. Five, they can operate with a high degree of plausible deniability; that's Russia's little green men or the Gary Powers flight. And autonomy could be incorporated into less-than-lethal weapons, like rubber-bullet machine guns or tear gas dispensers. I think that's important in that, you know, the Tiananmen Square protest worked, at least temporarily, because of what the Chinese army wasn't willing to do. The famous picture of the tank man in Tiananmen Square: that tank commander didn't want to run over that civilian, right? An autonomous version can be programmed to do that without...
Rajiv Parikh: Right, but someone has to control it, though. Someone has to give it that objective.
Tim Henderson: What I'm saying is that the dictator, the authoritarian power, no longer needs to convince some significant subset of the police and army to repress the population.
Rajiv Parikh: That's right. Say you're at a protest and the order goes out for the police to harm the crowd; you hope the police think of their own families before they do it. Some do, some don't. You're saying that possibility exists, whereas an autonomous weapon may not have that decision-making fail-safe. But I don't know about you; I still feel like we're going to have to hold the person accountable, the one who sent that system in.
Tim Henderson: Well, of course. You would hope that in our international rules-based world order that would happen, but...
Rajiv Parikh: But the potential exists, so the fail-safes aren't there to the same extent.
Tim Henderson: Right. In a nutshell, and I don't know that we have time to fully explore this, I think highly autonomous weapons are a very interesting subset of technology, because the majority of the people on the planet do not want them to happen, but they will absolutely happen within the next 30 years. So what does that mean about the relationship between humanity and technology? You can use that question as a crowbar to pry open the box. And I think what you find is that humanity doesn't put a lot of restrictions, or really any, on developing technology. Technology gets developed, and if it's controversial, sometimes it gets developed in secret, and then we backfill the morality around it. In a world of nuclear weapons, engineered pandemics, and climate change, are we reaching a point where those technologies operate so quickly that we cannot fix the problems they create? I think humanity has two options. One is what I'd call "more tech"; the other is changing human behavior. More tech looks like this: for nuclear weapons, we come up with Reagan's Star Wars; for climate change, we inject atmospheric sulfates to reflect more sunlight. But in a world of hypersonic weapons and torpedoes with nuclear warheads, in an age of a potential point of no return on climate change, which a lot of people think is 2 degrees Celsius above pre-industrial temperature levels, the more-tech option doesn't work, because you don't have enough time to deploy the solution.
Rajiv Parikh: Jason and Tim, welcome to the Spark Tank, where military strategists dissect the future of high-tech warfare. This is where two seasoned officers, one former and one current, with deep experience in national security strategy and military innovation, are forced to leverage their expertise and engage in a high-stakes debate. They're not just discussing the theory of AI and autonomy; they've been to the front lines of implementing these technologies in the military. They've dissected the complexities of integrating AI into the military, confronted the ethical dilemmas of autonomous weapons, and explored the talent landscape required to make it happen. Now it's time for fun. This is the ultimate military innovation showdown, where only one strategic vision can prevail. Basically, this game is two truths and a lie. I'm going to read you three statements; two are true, one is a lie, and you've got to pick out which one is the lie. I'm going to count down, three, two, one, and you're both going to show your fingers, three, two, or one, so you can't cheat off each other. Not that you ever would; you're good military men, so I'm assuming you'll be totally ethical about this. I hope you know your eighties history and pop culture; autonomous vehicles is our subject. Here's round one. Number one, in the 1989 Batman film, the Batmobile featured autonomous driving capabilities, allowing Batman to summon it remotely. Number two, in the TV series Knight Rider, KITT could transform into a submarine and navigate underwater autonomously. Number three, the 1993 film Demolition Man accurately predicted video-calling tablets and autonomous vehicles with voice control. Okay: number one is Batman's summonable Batmobile, number two is KITT going underwater, and number three is Demolition Man predicting the stuff we do today. All right, three, two, one. All right, I have Tim at three and Jason at two. So the winner of this round is Jason. While KITT had many advanced features, underwater transformation was not one of them; the other two are true depictions from their respective films.
Jason Hansberger: I just never remember KITT being near the water, so I had to go with that. I think it was a lot more dust and dirt and donuts than jumping in water.
Rajiv Parikh:than jumping in water. That's right. I don't remember it having that kind of, like, underwater cheaty cheaty bang. Anyway, um, alright, round two. Number one, the James Bond film Skyfall from 2012 showcased an autonomous Aston Martin DB5 that could drive itself to Bond's exact location using satellite tracking. Number two, the 1990 film Total Recall featured an AI powered taxi called Johnny Cab with a robotic driver that could engage in sarcastic banter. Number three in Jurassic Park, which is 1993, self-driving Ford Explorers guided visitors through the park on electrified tracks. Okay, you ready? 3, 2, 1. All right. The, so Jason is three. Tim is one. And the winner is Tim! So we have a tie.
Jason Hansberger:Rajiv, those Ford Explorers were not electrified.
Rajiv Parikh:You
Jason Hansberger:got
Rajiv Parikh:you there. No, they went through the park on electrified tracks.
Jason Hansberger:They did go through the park on tracks, but the tracks were not electrified. It was like the, you know, they let you All right.
Rajiv Parikh:So this is a protest. I'm going to have to talk to my, I'm going to have to talk to my committee on this one.
Tim Henderson:I'll accept, I'll accept it as a wrong. I will cede that to Jason. I think he's right about that. We'll
Rajiv Parikh:have an extra in case we come to a, to an impasse here. Okay. Number one was a lie. While Bond films often feature advanced car technology, Skyfall did not have a fully autonomous DB5. The other two are accurate depictions from their film. Other than the protest, uh, very detailed protest here. All right. I like that. I'm glad you you're, you're, you're so precise about it. Number three, the 2004 film I robot featured a shape shifting autonomous car that could transform into a submarine for underwater chases. Number two, the 2002 film Minority Report depicted autonomous cars running on vertical roads in a, in a ruthlessly efficient but terrifying traffic system. Number three, in the anime series Ghost in the Shell, AI powered tanks called Tachycomas could engage in philosophical debates while in combat. So number one, iRobot shapeshifting autonomous car could transform into a submarine for underwater chases. Number two, Minority Report autonomous cars running on vertical roads. Number three, Ghost in the Shell anime series, AI powered tanks called Tachycomas could have philosophical debates while in combat. So you're ready? Three. One. Oh, we got a split here. I like this. Okay. So Jason did a three Tim said, number one, the winner of this round is Tim. Wow. So he's watched a lot of old movies, but while I robot did feature an autonomous Audi RSQ, it couldn't transform to go underwater. The other two are accurate depictions from there. So I'm going to basically say, because one was under protest, we're going to do a round for, all right, all right, here we go. We have around four. Usually we only, we finish at three. We're going to do from four today, which is more fun. Cause then I, the, my writers get more credit for all the great work they do and they have more party, more party things you can do. All right. Number one in the TV series, Westworld autonomous horses with AI capabilities were used as a mode of transportation within the theme park. Number two. The 1983 film, Christine featured a sentient autonomous car that could regenerate itself and hunt down its owner's enemies. Number three, in back to the, in the back of the future trilogy. Doc Brown's DeLorean time machine had an AI assistant named WO Par that could pilot the car through different time periods. Three, two, one. All right. All three of you pick three
Jason Hansberger:because I mean, Dr. Marty had to drive the DeLorean. That's right. Yeah.
Rajiv Parikh:The DeLorean and back in the future didn't have any is this a name Whopper obviously because I pronounced it incorrectly. Basically know my age. It wasn't true. It did have a
Jason Hansberger:Mr.
Rajiv Parikh:Fusion. It did have a Mr. Fusion. That's right. That's right. All right. So I have you basically still tied. Uh, do you want to run five? This is it.
Jason Hansberger:I got a fifth one. Sure.
Rajiv Parikh:I got a fifth one. We're going for the win. Okay. Number one, the 2014 film. Big Hero 6 featured an inflatable healthcare robot named Baymax that could transform into a high speed autonomous flying vehicle. Number two in the animated series, the Jetsons. George Jetson's flying car could fold into a briefcase when not in use. Number three. In the Transformers franchise, the Autobot Bumblebee can disguise himself as a yellow Volkswagen Beetle and drive autonomously while communicating through radio clips. 3, 2, 1. Ooh, good. I actually could have an answer if one of you guys got it right! But you didn't. It was number 2.
Jason Hansberger:But did you say the Transformers movie or the Transformers cartoon?
Rajiv Parikh:Franchise.
Jason Hansberger:Yeah. I'm assuming it's a
Rajiv Parikh:movie.
Jason Hansberger:In the movie, he's the Camaro. GM sponsored that. Is he a
Rajiv Parikh:Camaro? No, but I thought he was a beetle.
Jason Hansberger:He is in the cartoon.
Rajiv Parikh:Okay, this is the cartoon. Jason will correct us. I'm
Jason Hansberger:such a nitpicker. I love it. I
Rajiv Parikh:love that you're a nitpicker.
Jason Hansberger:We're, we're ending in a
Rajiv Parikh:pseudo tie. Uh, while the Jetsons did not have, did have flying cars, they didn't fold into briefcases, uh, the other ones are apparently true. You both did a terrific job. You actually, basically, depending on how you count it, got two or three. That is very unusual for this game. So kudos to both of you. All right. I have a couple parting questions and, um, we're going to do this really fast. Uh, this is what we call our fast final four. Here we go. And I might just, just have one of you answer each. Your life path has led you to meet, interview, do business with people across industry, government, academia. So you've observed a lot of different cultures in action too. In your view, what makes the biggest difference in terms of organizational culture that leads to success?
Jason Hansberger:Oh, it's mutual trust. I think that's that's like mutual trust and empathy understanding each other's point of view. That's That's an easy one. What
Rajiv Parikh:did it work best for you?
Jason Hansberger:Uh, the The when I was at the first airlift squadron so, you know, they we fly the vice president and other cabinet secretaries and there's a Samford there's sam fox is how we describe the the culture and it's best kind of articulated in a single sentence, which is I do everything I do doing my best knowing that the person next to me is doing the same. Uh, that's a that Second element is very very empowering. Uh and allows kind of uh, a pure permission to be excellent Uh, you have to have that or you know, if you have pure permission to be cynical we've all been part of those places. Um, no one's great. But if you have pure permission to be excellent, we celebrate excellence for each other. Then, then you can accomplish some pretty amazing things. But it all starts in mutual trust.
Rajiv Parikh:I love that. All right, we're going to quote that after this. Okay, Tim, what's your personal moonshot?
Tim Henderson:I'd like to publish not only this novel, but, uh, I have actually a long list of other novels I'd like to get published as well.
Rajiv Parikh:All right. I enjoyed the novel. For me, it was, uh, it was like a combination Crichton and Clancy novel, so I learned a lot as I read. I had to read, I had to go, I had to look up a bunch of things. It was really fun. Here's a question for you, Jason. Do you have a favorite life motto that you come back to often and share with friends, either at work or in life?
Jason Hansberger:Yeah, I have this motto. Um, I shouldn't say I, it's we. My wife and I have a motto that, uh, you should seek significance over success. To me, success is inward defined and backwards looking. And if you're trying to be successful, then you can't say, I'm successful because, you know, next year I've done this. You know, I'll do this thing. It's I'm successful because I got to this place that I defined as a place I wanted to be. But lots of eyes to me in life, you're going to face choices where you can try to be successful or you can try to be significant. Um, and significance to me is, uh, is outward defined and forward looking. So is this podcast, was this a thing of significance? It really depends on, for, you know, Rajiv and Tim and, and, and listeners and, and what, what we said, if it's something useful to them and they take it forward and they bring it to other people. And so significance is defined by the people around us. And it's, it's, it's. Uh, defined by the future. And so when it comes, you know, to make decisions in life, if you're trying to lead a life of significance to do things of significance, uh, it'll push you in directions, I think, that are, that are pretty rewarding.
Rajiv Parikh:That's really awesome. That's a great line. Um, and I should just end right there, but I'm not going to, I'm going to keep going. What's the next thing you plan on starting, Tim?
Tim Henderson:I want to go back for a second to, uh, the novel that I wrote because I, I think it's an important, um, is why I wrote that novel. And two, there's two things compel me to write that novel. One is an enormous admiration. For everyone that serves in our military. So I was a submarine officer. My father was a career infantry officer, company commander in Vietnam. My, my son is a Ranger qualified first Lieutenant in the 82nd Airborne. So I, you know, I, I. So I would love for my novel to be compared even remotely to the novel by Anton Meyer, Once an Eagle, and it's a, it's a classic, and I strongly recommend that everyone read it in addition to the novels. Uh, I am the, uh, in charge of programs for the Military Officers Association of America, the Silicon Valley chapter. And I'm, uh, hopefully going to recruit Jason to come speak to our group. Uh, but in addition to, you know, uh, a long list of speakers I'd like to attract, I'd like to, I, I, I, I wrote a, A paper, uh, with one of my Naval Academy classmates called in which we advocated a national defense auxiliary, and we'd submitted it to the joint forces quarterly. Um, and we still, it didn't get picked up by JFQ, but, um, but it got very close and we're hoping to tweak it and get it published somewhere. And the idea is, is basically that. Uh, we have far many more veterans than we actually do active duty people. And you look at, uh, countries like Ukraine and Israel and a large part of their success and their defense against. you know, vastly superior enemies is the way that they harness the whole of their nation to, to, uh, contribute to their military, uh, defense. And so I think there's a lot of things that veterans can do to help active duty units, uh, everything from technology acquisition to, uh, recruiting, you know, the, or. Services have fallen short of recruiting goals, uh, most years in the past 5 years. Um, and so I think it could be an incredible force multiplier for basically at no cost. And so I'd like to try to. Be a small part of making that happen.
Rajiv Parikh:All right. Great sense of purpose. I'd say great sense of purpose and a great level of service. And I appreciate both of you for being here and engaging in this discussion and debate. And I think it's. Super thought provoking and helpful, uh, where you, you know, the intention is to protect Americans and protect, uh, uh, those around the world that have, want to live in a rules based world. And, um, I appreciate both of you for being here and talking about your points of view and programs that you're creating and how you're trying to make the world a better place, so thank you so much for coming today.
Jason Hansberger:Thank you, Rajiv. It's been a pleasure.
Tim Henderson:Yeah, thanks, Rajiv. And, and Jason, uh, you as well. I've learned a lot.