The DeepSeek Psyop Explained: Nicolas Chaillan
By Jan Jekielek
2/11/2025Updated: 2/12/2025

[RUSH TRANSCRIPT BELOW] The Chinese AI app DeepSeek recently became the most downloaded iPhone app in the United States and caused U.S. tech stocks to plummet. President Donald Trump described it as a “wake-up” call for American companies.

So what’s really going on? Is DeepSeek as powerful as people think? Or is there a bigger story here?

In this episode, we sit down with AI expert Nicolas Chaillan, former chief software officer for the U.S. Air Force and now founder of the generative AI company Ask Sage.

Views expressed in this video are opinions of the host and the guest, and do not necessarily reflect the views of The Epoch Times.

RUSH TRANSCRIPT

Jan Jekielek: Nicolas Chaillan, such a pleasure to have you back on American Thought Leaders.

Nicolas Chaillan: Good to see you.

Mr. Jekielek: DeepSeek has been described by some as a second Sputnik moment for America. What do you think?

Mr. Chaillan: I don’t think it’s that easy, right? When you look at what happened, effectively, you see China, particularly the CCP, manipulating markets. It’s interesting to see how quickly U.S. companies and investors in different markets reacted to this news, fake news, really, which was that China had created a better model with a very small investment in GPUs, using older generations of NVIDIA GPUs. None of it is true.

When you start digging into what happened, you find that the company is really led by investors who have been investing in crypto for a while. They had access to about 50,000 of the latest NVIDIA H100 chips. The models are not that good. Not only is tremendous bias baked into the models, coming directly from CCP propaganda, but you also see something pretty insane: they ingested an immense amount of data coming from OpenAI and other companies, which everybody does.

But at the end of the day, what you also see is these models being trained to pass the benchmarks that are used to decide whether or not they are better. And quite honestly, when you use them in real life with real use cases, as we do here, you find pretty quickly that they are not up to par and quite behind the latest models from OpenAI, Google, and even Meta. It’s important to realize that they’re leading in many fields and they know how to manipulate opinions and markets. They shorted the stocks. They made hundreds of billions of dollars with this announcement. They are smarter than us and play a better game to manipulate what’s going on in the United States and even in Europe. But this is not a Sputnik moment when it comes to AI. We need to be at the top of our game, and we need to make sure the government is adopting the best U.S. companies’ capabilities. It does not mean that China is leading right now when it comes to these models. But it is still something we need to pay attention to, because at the end of the day, they might be winning.

Mr. Jekielek: Nick, I’m going to get you to unpack a few things for me before I go further. For example, you said that this DeepSeek AI is being trained against the benchmarks. Before we talk about that, tell me exactly what it means to train an AI, for those of us who are uninitiated.

Mr. Chaillan: Most of the time, the way these large language models are trained is literally by ingesting massive amounts of data, to pretty much capture everything that exists. In fact, they’re running out of data. So they are creating new data by using large language models to create new content, because we’re running out of data to ingest. It’s not surprising that you see all these lawsuits, not only with OpenAI but also with other companies.

Now you’re also seeing DeepSeek effectively ingesting data directly from OpenAI, using their APIs and their technology to generate responses, including, of course, their documentation and all these things. That’s pretty common. That’s the way these models are trained. And it’s very difficult to then change the way these models behave, because they get this bias from the content ingested, based on the volume of data ingested.

It’s very difficult for DeepSeek to then remove facts that are automatically ingested via this massive amount of data. That’s why you see the models initially answering the questions about President Xi and all the information that China is trying to suppress. But then they have safety mechanisms to look at the response and override the answer. That’s how they start hiding all the moments in history that China doesn’t want us to know about.
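To make the API-based ingestion he describes concrete, here is a minimal sketch of collecting prompt/response pairs from a stronger model’s API as training data for a new model. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompts, and file name are placeholders, and this is a generic distillation pattern, not DeepSeek’s actual pipeline.

```python
# Sketch of distillation-style data collection: query a stronger model's
# API and save prompt/response pairs as training data for a new model.
# Generic pattern; model name, prompts, and file name are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

seed_prompts = [
    "Explain transformers to a high-school student.",
    "Write a Python function that merges two sorted lists.",
]

with open("distill_data.jsonl", "w") as f:
    for prompt in seed_prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",  # example teacher model
            messages=[{"role": "user", "content": prompt}],
        )
        pair = {"prompt": prompt, "response": resp.choices[0].message.content}
        f.write(json.dumps(pair) + "\n")  # one training example per line
```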

Mr. Jekielek: Nick, on this point, it seems to me like such an odd thing that the model will actually show you the answer for a split second and then basically say, sorry, I can’t show that to you, right? I was just looking at a recent tweet from a user who asked hundreds of times about Tiananmen Square, for example, and kept getting the same answer. It appeared to, quote unquote, trigger the AI into saying, look, enough, don’t ask me this question anymore. But it almost seems intentional, because they wouldn’t absolutely have to show you that there was an answer and then hide it. Can you make sense of this?

Mr. Chaillan: The way these special models called reasoning models work is that you see the reasoning first. That didn’t use to be the case, right? That’s very recent, with the new o1 models coming from OpenAI. These were the first models that reason first, through a very detailed process of thinking. And the longer it thinks, the better the response gets, and the more it costs to generate, of course. They’re showing you the thinking, which many models don’t show you; they only show you the response.

By showing the thinking, they have no way to hide what the model is doing behind the scenes. When the response is ready, it triggers their safety mechanisms to then remove the answer. But it’s too late, because the thinking and the reasoning of the model are already there for everyone to see. You can see all those insights being shared with the user. That’s kind of the downside of reasoning models: they have no mechanism to hide that.
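A minimal sketch of the timing problem he is pointing at: if the safety check runs only on the finished answer, the reasoning tokens have already been streamed to the user. The blocklist and streaming logic below are illustrative stand-ins, not DeepSeek’s actual code.

```python
# Sketch of the timing problem: a post-hoc filter checks the *final*
# answer, but the reasoning tokens have already been streamed to the
# user. The blocklist is a stand-in for a real moderation step.
BLOCKLIST = {"tiananmen"}

def stream_with_posthoc_filter(reasoning_tokens, final_answer: str) -> str:
    for tok in reasoning_tokens:
        print(tok, end=" ", flush=True)  # reasoning leaks as it streams
    print()
    if any(term in final_answer.lower() for term in BLOCKLIST):
        return "Sorry, I can't discuss that."  # the override arrives too late
    return final_answer

reasoning = "The user asks about Tiananmen Square in 1989 , when ...".split()
print(stream_with_posthoc_filter(reasoning, "In 1989, at Tiananmen Square..."))
```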

Mr. Jekielek: Explain to me the relationship between the number of chips and the success of one of these models.

Mr. Chaillan: We’re talking about a massive amount of data and a very complex machine learning process that takes an immense amount of GPU power and electricity. The better the chips, the faster you can train models; the faster you train models, the more quickly you can release the next generation of models. You just saw OpenAI release o3-mini last week, their latest model, which is far superior, by the way, to any other model. These models took months to update and generate.

But if you had older chips, and fewer of them, it would actually take even longer. It’s just a matter of velocity and speed to deliver the latest capabilities faster. Effectively, these companies are investing in infrastructure that enables them to release their next-generation models faster and faster. It’s a never-ending game, because the GPUs keep getting better and more efficient, and you have to keep buying new hardware. All that money spent on NVIDIA chips is then used to train the models.
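A back-of-the-envelope way to see the chips-to-speed relationship is the commonly cited approximation that training takes roughly 6 × parameters × tokens floating-point operations; divide by the fleet’s sustained throughput and you get a training time. Every number below is an illustrative assumption, not a figure from the interview.

```python
# Back-of-the-envelope: how GPU count affects training time, using the
# common approximation C ~= 6 * N * D training FLOPs (N = parameters,
# D = training tokens). All numbers are illustrative assumptions.
N = 500e9              # assumed parameter count
D = 10e12              # assumed training tokens
C = 6 * N * D          # total training FLOPs

FLOPS_PER_GPU = 1e15   # assumed peak FLOP/s for a modern accelerator
UTILIZATION = 0.35     # assumed fraction of peak actually sustained

def training_days(num_gpus: int) -> float:
    """Days needed to perform C FLOPs on num_gpus GPUs."""
    seconds = C / (num_gpus * FLOPS_PER_GPU * UTILIZATION)
    return seconds / 86_400

for gpus in (2_000, 10_000, 50_000):
    print(f"{gpus:>6} GPUs -> ~{training_days(gpus):,.0f} days")
```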

Mr. Jekielek: Tell me very quickly, how is it that you know that they didn’t use this low amount of computing power that they claim?

Mr. Chaillan: It’s pretty obvious when you look at the research. First of all, a five or six million dollar investment is just plain ridiculous. But really, at the end of the day, that’s what the CCP does, right? They lie about every number on the planet. When you see all that access to these beefier chips, and when you look at the background and the knowledge of the people behind DeepSeek, it’s pretty obvious to anyone that they know what they’re doing and that they have access to a pretty much unlimited amount of compute and funding. The numbers just don’t lie, but the CCP does.

Mr. Jekielek: You said that they trained DeepSeek on the benchmarks themselves to kind of give it the appearance of being more sophisticated than it actually was. Can you expand on that?

Mr. Chaillan: Yes, everybody is doing that now that the benchmarks are public, right? It’s pretty easy to pass a test when you know what the test is about. So they’re going to spend extra time and extra focus on trying to get as good a result on those questions as possible. It’s easy to cheat on a test. It’s much harder when you have real-life scenarios. That’s the way we test models, and we don’t make those tests public, because the minute you do, you lose that element of surprise.
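One generic way labs guard against this “training on the test” problem is an n-gram overlap check between training documents and benchmark items, sketched below. The 13-gram default mirrors common decontamination practice; this is a heuristic illustration, not any particular lab’s pipeline, and the example texts are hypothetical.

```python
# Sketch of a benchmark-contamination check: flag training documents
# that share long n-grams with benchmark questions. Generic heuristic,
# not any specific lab's pipeline.
def ngrams(text: str, n: int) -> set:
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contaminated(train_doc: str, benchmark_items: list, n: int = 13) -> bool:
    """True if the document shares any n-gram with a benchmark item."""
    doc_grams = ngrams(train_doc, n)
    return any(doc_grams & ngrams(item, n) for item in benchmark_items)

# Hypothetical benchmark question leaking into a scraped training document:
bench = ["What is the capital of France and why did it become the capital?"]
doc = "trivia dump: what is the capital of france and why did it become the capital"
print(contaminated(doc, bench, n=8))  # True -> likely test-set leakage
```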

Mr. Jekielek: The bottom line is, you know, they launched this thing and kind of wowed the world, wowed the markets, wowed users. But then, when you look under the hood, it’s really not nearly what it appeared to be.

Mr. Chaillan: Don’t get me wrong. They still had very good outcomes when it came to the training methods, and they did come up with ways to be more efficient and save money. There’s definitely value there. Of course, the fact that it’s the CCP behind it, with bias and a bunch of made-up answers, kind of kills the entire value, in my opinion, because how are you going to trust it? Even in math, even in research, they could train it in a way that if you ask a question in English, you get answers that are not as good as if you were to ask it in Chinese, for example. That would be easy to do.

Again, I would never trust it. That defeats the point of using it. The numbers, the benchmarks, and the amount of money they claim they spent to train it are completely made up. I don’t believe it for one second. It might not be as much money as was spent on OpenAI’s o3 or o1, but it’s still way more than they’re claiming, maybe a thousand times more.

Mr. Jekielek: When you left the Air Force, when you left being the software chief over there back in 2021, I think around the time when we first spoke online, you said you believed that the U.S. had really lost, or was on the cusp of losing, the war in AI. But you sound more optimistic today.

Mr. Chaillan: I always said that the U.S. was doing very well but that the DOD was losing. That’s a little detail that often gets lost in translation with my French accent. I think that’s the most important fact. At an incredible velocity and speed, the CCP is adopting their latest Baidu GPT and DeepSeek models, and even the new Alibaba models, into their military networks, across classification levels, on very complex weapon systems. Their models are less capable than the U.S. models.

Unfortunately, the U.S. has a massive wall, a lack of adoption of best-of-breed U.S. capabilities in the Department of Defense. The U.S. is leading compared to China when it comes to companies, but when it comes to defense, which is quite honestly the most important use of AI we could think about in 2025 and beyond, you see the DOD being at least five years behind China. It’s compounded by the fact that these models augment and empower the velocity of people up to 50 times, so one person effectively becomes 50 people. Let’s face it, the CCP already has more people.

Now they turn each of those people into 50, thanks to being augmented and empowered by AI. It’s almost impossible to compete, particularly if you don’t have access to AI capabilities. I’m very concerned about the amount of money spent, particularly during the Biden administration, for four years, on AI ethics. There was also a massive amount of money wasted by research teams building their own DOD capabilities, like NIPRGPT, that are years behind what you see private companies do, and that push us even further behind China.

Mr. Jekielek: I’m wondering if you could actually explain this. I really like how you talk about this idea of AI increasing the velocity of the human being. Just to give a more layperson example, this is far beyond using a chatbot.

Mr. Chaillan: Yes, it’s a life changer for me. It became my chief of staff, my accountant, my lawyer. I have some of the best lawyers in the United States, and even when I send them contracts to review, they have very few comments compared to what the AI gives me. It’s pretty mind-boggling to see what you can do when you move to the more advanced AI models and when you know how to use them. You see a lot of people giving up, frustrated that they can’t get to the right outcomes.

I tell people, look, blame yourself, and start using it right. We have a lot of free videos on our website to help people get started with GenAI. It’s a life changer, but it needs some learning. And honestly, it is not rocket science. On average, it’s going to take about three months for someone to get the hang of it. But it’s super powerful.

My company was created by GenAI. My logo, my website, my application, and 90% of the code were generated by GenAI. We estimated I would have needed 35 developers full time to build what we built with two people in eight months. The entire company, including the logo and everything we do in marketing, on LinkedIn, and even in Google search and advertising, is 100% driven and designed by GenAI, giving us feedback, options, ideas, and courses of action.

It’s your chief of staff on steroids. It’s a way to save an immense amount of time, when you know how to use it, for pretty much every non-blue-collar job. It’s funny, because you kept hearing for years that technology was getting rid of all the blue-collar jobs. What you see happening is the exact opposite. The jobs with the highest likelihood of being disrupted by AI are actually non-blue-collar jobs, particularly coders, accountants, and lawyers. If someone had told you this 10 years ago, people would not have believed it; people thought that going into coding meant a safe, secure job for 20 or 40 years. What you see now is a very high likelihood that most basic coding jobs will be replaced by AI.

Mr. Jekielek: That’s absolutely remarkable. The other thing you said when we talked offline is that you actually get the AI to give you a range of options when you’re querying it. You don’t ask it, okay, give me the answer, right? Because my big concern is that you don’t want the AI to tell you what to do. That sounds like a really bad idea.

Mr. Chaillan: Yes, and you also bring your bias into the questions. That’s probably the number one issue and mistake we see people make. We call that prompt engineering. It’s really learning how to prompt and how to question things; words matter. Instead of saying, can you do X, you could say, do X. It’s as simple as saying you want five courses of action for a problem instead of one solution. You’re going to get much better-defined answers, but also more options to navigate.

Sometimes I say, give me a mix of numbers two and three, and how would you do that? Then you become the driver. You become the orchestrator of the AI, and you guide it. You still need to have your brain and your desired outcome at the center of the puzzle, but you also want it to surface options that maybe you didn’t think about. If you show up with bias and pre-made answers and you guide the bot to those solutions, then you’re limiting your options by limiting the choice of answers. So you want to be open to pushing outside of your comfort zone.
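As a concrete illustration of this prompting pattern, the sketch below contrasts a vague ask with a request for multiple courses of action, using the OpenAI Python SDK. The prompt text and model name are examples, not a prescription.

```python
# Contrast a vague prompt with one that requests several courses of
# action. Uses the OpenAI Python SDK; model name and prompts are
# examples only, and OPENAI_API_KEY is assumed to be set.
from openai import OpenAI

client = OpenAI()

weak_prompt = "Can you help me reduce my cloud costs?"  # invites one generic answer

strong_prompt = (
    "Give me five distinct courses of action to reduce our cloud costs, "
    "numbered 1-5, each with its main trade-off. Do not pick one for me."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": strong_prompt}],
)
print(resp.choices[0].message.content)

# A natural follow-up in the same style keeps the human as orchestrator:
# "Give me a mix of options 2 and 3 -- how would you combine them?"
```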

Mr. Jekielek: Give me a few examples of use cases that work for you. One, which is fascinating to me and which you just mentioned, is looking for holes in contracts, making sure your contracts are rock solid. One that I use a lot for research is perplexity.ai. I know Ask Sage has a similar capability, which I’m in the process of exploring further right now. What are some other use cases, to give people an idea?

Mr. Chaillan: Yes, there are so many options, right? It really depends on what you do on a day-to-day basis. We have coders using it for entire coding jobs, research, testing, cybersecurity, and compliance. All this compliance paperwork we’ve got to fill out, not only in defense but also with climate change requirements and all these different paper reports we’ve got to generate. That’s a great way to get it done much faster.

You talk about summarization, extraction of insights, contracts. I mean, it’s not just writing. We use it to write all our proposals and respond to bids. My company responds to a lot of government bids. We went from five days on average down to 32 minutes to respond to a government bid, with 98 percent accuracy on the first try. Take contracts: it works on both sides. It’s writing your own contracts, but it’s also reviewing contracts that you receive from third parties.

For me, it’s been a great way to see what I should pay attention to, what pitfalls to avoid, what I should be concerned about, and which clauses should be reviewed more deeply by humans. It’s also a way to navigate the noise and save a whole bunch of time instead of manually reviewing something. Just last week, I was reviewing a contract I had signed, and I had forgotten what the terms were when it came to termination.

Is it one year? What is it? I just attached the contract in Ask Sage and said, tell me what’s the deal with the termination clause. It gave me, in about two minutes, a quick summary of the contract’s termination clause. I saved probably 20 to 40 minutes; figuring this out took me one minute. It’s every aspect of my life as CEO, whether it’s marketing or my LinkedIn posts. I trained it on all my previous LinkedIn posts and all my articles. Now it speaks with my tone, and almost has a French accent. It’s all about how you use it.

Unfortunately, most people don’t have access to all the tools, particularly the ones we built into our product to customize the behavior and the tone. Many people using something like ChatGPT sound like robots; you can tell it was written by GenAI. I welcome anyone to compare my LinkedIn posts from two years ago to my posts from yesterday and tell me whether they can tell which were written by GenAI. I bet you cannot.

Mr. Jekielek: Let’s jump back to DeepSeek. Bottom line is you’re saying that the US models like OpenAI and perhaps others are actually superior. Do I have that right?

Mr. Chaillan: Yes, we tested it on real-life use cases, and the benchmarks are really mostly useless. They do a decent job of at least differentiating the junk from the good models, but they are not very good when it comes to the details of real-life scenarios. And there’s nothing better in life than real-life use cases, right? Doing a deep dive with real data and real research. When we test models, we have 150 models on Ask Sage now, which is pretty insane, from commercial models like OpenAI, Google, Anthropic, and Meta all the way to pure open-source models.

We put DeepSeek on Ask Sage because the Department of Defense and the intelligence community wanted to research it securely. You have to be close to your enemy to know what’s going on. I saw so many people freaking out that we put DeepSeek on Ask Sage. Number one, we did it securely: it’s self-hosted, siloed, and sandboxed. But if you don’t know what your enemy is doing, how are you going to be able to take action? It’s just foolish to think we should put our heads in the sand and hope for the best.

So that’s the first thing we did. We gave access to researchers with very clear guidelines on how to use it and how not to use it. It’s been a great tool for trying to understand how they built it, and for seeing the bias. Honestly, the bias is actually super revealing, because you can then look into what they try to suppress, which gives you a hint about what they care about. When you find what they’re hiding, it’s a great way to keep digging to see what else they’re trying to hide. So it’s actually a super interesting tool for intelligence.

But again, the way we built this entire stack is to be secure from the ground up, completely air-gapped and siloed, so there is really no cyber risk. The responses are biased, obviously; each model is going to have bias. But we put it through real-life use cases in coding, cybersecurity, compliance, and data analysis. It’s doing fairly well on many questions. It’s not as good at coding as other models like OpenAI’s o3 or Anthropic’s Claude 3.5 Sonnet.

There are so many options today, which, by the way, is why tools like Ask Sage are essential, not only for the government but also for companies, so you’re not getting locked into one technical stack. We don’t know which company is going to be leading tomorrow, and you don’t want to be locked into OpenAI or anybody else. My job is to give customers a diversity of options so they can try things out and see what sticks. And you want to have those options, and you want to have them quickly.

More importantly, you want to be able to ingest your data once and use any model. So we built this abstraction layer so that you can ingest all your enterprise business data, all your knowledge base, decoupled from the model. That way, if a new model comes out tomorrow, whether from the same company you’re already using or from a completely new company with a disruptive way of doing things, and you want to use it right away, there’s no change to make to any of the work you’ve built.

That’s a game changer for companies, and that’s why we’re seeing such massive success. All our competitors are only pushing their own models. Microsoft, OpenAI, Anthropic, all these companies are pushing their own models, with their own bias. We are agnostic. When you’re a business, and even more when you’re in the government, you don’t want to be locked into anybody.
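A minimal sketch of the decoupling pattern he describes: documents are embedded once into a store that knows nothing about the chat model, and the model is a swappable function. The libraries, embedding model, and names below are illustrative assumptions, not Ask Sage’s implementation.

```python
# Sketch of "ingest once, use any model": documents are embedded into a
# store that is independent of the chat model, and the model is passed
# in as a swappable function. Generic RAG pattern, illustrative only.
from typing import Callable
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

class KnowledgeBase:
    """Model-agnostic store: ingest documents once, retrieve for any LLM."""
    def __init__(self) -> None:
        self.docs: list = []
        self.vecs: list = []

    def ingest(self, doc: str) -> None:
        self.docs.append(doc)
        self.vecs.append(embedder.encode(doc, normalize_embeddings=True))

    def retrieve(self, query: str, k: int = 3) -> list:
        q = embedder.encode(query, normalize_embeddings=True)
        scores = [float(v @ q) for v in self.vecs]  # cosine (vectors normalized)
        top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
        return [self.docs[i] for i in top]

def answer(kb: KnowledgeBase, query: str, llm: Callable[[str], str]) -> str:
    """Swapping vendors means swapping `llm`; the ingested data is untouched."""
    context = "\n".join(kb.retrieve(query))
    return llm(f"Context:\n{context}\n\nQuestion: {query}")
```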

Mr. Jekielek: Just for the very basic user, what is the difference between someone loading DeepSeek on their phone or on their computer directly and loading it through Ask Sage?

Mr. Chaillan: Yes, there’s a huge difference, right? The hosted DeepSeek app uses Chinese hosting and servers. So all your data and everything you do goes there, including your keystrokes and everything you type, which means they could even potentially see what else you’re doing on your phone or your device. So be very careful there. I would never use it, ever, because it is all sent to China and the CCP, and companies there have a mandate to share with the government.

That’s very different from hosting the open-source version on your device, or on Ask Sage like we did. That’s completely controlled and hosted in the U.S., and no data is flying back to the CCP. You’re still going to get the same bias and the same made-up answers for some of the questions, although, funny enough, what we found is that the open-source models have less bias than the hosted DeepSeek app. For example, the Tiananmen Square answer is not blocked on some of the open-source DeepSeek versions.

But yes, if you download the hosted DeepSeek app on your phone, it’s going to be blocked. It seems to be a safety layer they added on top of the model, not into the model. Again, you can host the model on your own device; you can download it onto your laptop and host it there, but not everybody knows how to do that. That’s why the average person is going to go use the app, and that’s where the damage is done, because all your data flows to China. That’s why companies like us exist: we host it for you, so you can just use us and query the model instead of hosting it yourself.
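For readers who do want to self-host, here is a minimal sketch of running a small open-source DeepSeek distill locally with Hugging Face transformers, so prompts never leave the machine. The model ID is one example of a small distilled variant; larger variants need far more hardware, and this is a sketch, not an endorsement of any particular setup.

```python
# Sketch of running a small open-source DeepSeek distill locally with
# Hugging Face transformers: prompts stay on your machine. The model ID
# is an example distilled variant; larger variants need serious hardware.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # example model ID
    device_map="auto",  # requires the `accelerate` package
)

out = generator(
    "Briefly list the trade-offs of self-hosting a language model.",
    max_new_tokens=256,
)
print(out[0]["generated_text"])  # nothing here is sent to a remote server
```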

Mr. Jekielek: How does DeepSeek compare with TikTok in terms of a threat?

Mr. Chaillan: Number one, DeepSeek has clearly used data coming from TikTok to train itself. So it’s interesting how that access to data is paying off. As you know, the next weapons, particularly AI weapons, are all driven by data, and the more data you have, the more powerful the weapons can become. It’s foolish to give the CCP access to all that TikTok data, and you see some of the results here, with DeepSeek having access to all of it. It’s very similar, right, to the risk and the threats of TikTok.

I was disappointed when I saw that Americans were downloading Chinese apps the minute TikTok got banned; they seemed not to understand what we’re facing. There is a lack of education when it comes to the threats and the risks that the CCP brings, not just to your data but also to your family and the nation as a whole. Most people think, I have nothing to hide, and I don’t care if China sees my stuff, but it goes beyond that. People don’t comprehend how China is able to use that data to better understand the population, to create political weapons of misinformation, and to manipulate markets.

A perfect example is the DeepSeek announcement, right? The way they were able to wipe out half a trillion dollars of market cap in a day, shorting the stock and making a whole bunch of money at the same time, with really nothing to back it up, just noise, completely manipulating the market. That’s scary, right?

Because if they get good enough to manipulate the entire market in the United States and Europe, that’s going to become a real threat to those nations’ economies. And they are doing that by understanding how to communicate and how to position messaging to Americans. There’s no better way than seeing how people react to videos and information on TikTok and other apps, because they can see what works, what doesn’t work, and how people respond to those videos and data points. That’s then used to run these kinds of campaigns that can completely destroy the economy of the United States.

Mr. Jekielek: Nick, one thing that just struck me: it was so weird somehow that all these users, when TikTok was going down, were jumping to an overtly communist party app, even named that way. Was this not a demonstration by the CCP of being able to subtly influence people through TikTok, through messaging, through finding the people most susceptible to being influenced this way, to go to that specific app? Isn’t this itself a case study in how people can be manipulated?

I saw videos of young people saying, it’s amazing, people have housing in China, almost everybody has it, we’ve been lied to all along that it’s such a horrible country. Just kind of wild stuff on the face of it. It seems so unlikely and bizarre that they would be running to this RedNote app. What are your thoughts?

Mr. Chaillan: We started calling for the ban of TikTok in 2018. That’s what happens after six years of brainwashing. You see the effect right there, and you’re 100 percent right. A lot of people are now completely brainwashed, not just on the CCP and China, but also on dozens of other subjects. Look at Palestine and what’s going on, and how many hateful, anti-Semitic messages you can find on TikTok.

That’s not an accident; that’s designed into the system. It has always shocked me that people would use something that is banned in China yet allowed elsewhere. China is smart enough to know the damage it would cause to their kids, so they don’t let people in their own nation use it. But yes, we’re stupid enough to let that happen.

Mr. Jekielek: DeepSeek is obviously not taking in as much information. It’s not ingesting through a camera; it’s just ingesting knowledge, or what sorts of questions people have. But basically, you’ve been suggesting that it’s the same model of data acquisition, with the ability to influence through the types of answers it gives. Can you flesh that out for me a little bit?

Mr. Chaillan: Number one, in the terms and conditions they mention they can log your keystrokes. So everything you’re typing, not just what you’re typing into the app; they could potentially log everything you’re typing. That could be an immense amount of information. Even when you do speech-to-text, those keystrokes are still sent to the operating system, so they still get the text typed into the boxes, the inputs of your apps.

Even if you use voice, they could still get the text coming out of the speech-to-text engine. So we’re talking about an immense amount of data they would get access to. And you’re right, you also get to control what right and wrong is. You can make your own story. As you know, whoever controls history controls the world.

Mr. Jekielek: Are the responses custom-made for people? In general, are AI models learning about who you are and customizing their responses to you specifically at this point? Or are the answers more generic? Does DeepSeek have this capability, from what you can tell so far through your tests?

Mr. Chaillan: No model truly learns on the fly. The way this works, when you activate this feature, whether on OpenAI or another platform like Ask Sage, where we have it as well, is that when you’re searching something and the system finds that the answer is relevant to the user, we can log it and keep that insight in a database. That database is then used to augment the knowledge of the model. It’s never in the model; it’s on top of the model.

Now, some companies offer a free service, and in exchange you’re giving away the rights to your data to the company. They can then use it as training data and ingest it into a future release of the model, but not the current model, right? It’s only used for the next training cycle. We don’t do that at Ask Sage, because, number one, we never want to train on customer data. We have very sensitive DOD and U.S. government data, and banking and health care customers. We never train models with customer data.

Most do, but we don’t. And it’s never in the current cycle anyway; it takes months to train a new model, so you’re never going to magically see that insight show up in the model right away. We use it often to give context to the bot about who you are and what you’re working on. It’s always great not to have to re-explain who you are and what you’re doing, so you get better, faster answers.
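A tiny sketch of this “on top of the model, never in the model” idea: approved insights go into an ordinary database and are prepended as context on later queries, while the model’s weights never change. The schema and helper names are illustrative, not Ask Sage’s actual design.

```python
# Sketch of learning "on top of the model": approved insights are kept
# in an ordinary database and prepended as context on later queries.
# The model's weights never change. Illustrative only.
import sqlite3

db = sqlite3.connect("memory.db")
db.execute("CREATE TABLE IF NOT EXISTS insights (topic TEXT, note TEXT)")

def remember(topic: str, note: str) -> None:
    """Log an insight the user marked as relevant."""
    db.execute("INSERT INTO insights VALUES (?, ?)", (topic, note))
    db.commit()

def context_for(topic: str) -> str:
    """Fetch stored insights to prepend to the next prompt."""
    rows = db.execute("SELECT note FROM insights WHERE topic = ?", (topic,))
    return "\n".join(note for (note,) in rows.fetchall())

remember("profile", "User is a CEO; prefers concise, numbered options.")
prompt = context_for("profile") + "\n\nDraft a LinkedIn post about our launch."
# `prompt` now carries the remembered context; send it to any chat model.
```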

But for us at Ask Sage, we activate it on demand. And you can create what we call data sets, which are like buckets of data, where you can have different topics, like folders, and ingest files and whatever data you want. We’re just beginning to scratch the surface of this. It’s going to change a lot of things. Two years ago, I was not an AI guy whatsoever; I was security, software, and cloud. I’ve created 14 companies in 25 years.

I was the chief software officer of the Air Force and Space Force, and chief architect at DHS, after I moved here from France; I created my first company when I was 15. I had never seen a technology, including the smartphone or the cloud, that impacted my day-to-day life as much as this. The issue is that people often start to play with it and give up after a couple of weeks, or just don’t grasp the opportunity, the scale of this, and what you can do with it.

Honestly, they need to double down. I’m very much worried when I hear people like Sam Altman or Mark Zuckerberg talk about the impact on jobs and the need for universal income. They’re starting to bring this up now. They know that within five years a good 50 to 70 percent of existing non-blue-collar jobs are going to be drastically impacted by AI. The people that are going to make it are embracing it. They are augmenting themselves to reach 20, 30, 40 times other people’s velocity. That’s going to be a big disruption, and the world is not ready for it.

We can debate all day long whether it should happen or not. But it will, and if we don’t do it, China is going to do it. So regardless, it’s going to happen. My take is that people need to jump in as much as possible and learn prompt engineering and everything there is to know about how they can use these technologies to become AI-augmented people. That’s going to be the next generation of workers.

Mr. Jekielek: I, for one, would like to kind of keep an air gap between myself and the AI.

Mr. Chaillan: It’s very important, of course, to look at privacy and safety and security, and at how you decouple the way you’re going to use this from the way you’re going to let it give you insights. At the end of the day, it shouldn’t replace humans. These basic tasks that we demonstrated are fine, but when it comes to real-life decision-making, the human needs to be in the driver’s seat. But it doesn’t mean you don’t also understand the pros and cons of the technology. Unfortunately, what you’ve seen is a lot of people focusing on the cons, instead of saying, okay, that’s a limitation of the technology; how do I create my next company to overcome this and find a way to go above it and fix it?

That’s what we’ve done here at Ask Sage. We found solutions that keep pushing out the limitations and the issues we were seeing with GenAI. And now we’ve pushed it to universes we didn’t think were possible. It’s so sad to see people always looking at the bad things and not seeing those as opportunities to create value.

Mr. Jekielek: Tell me a little bit about the intersection of AI and nuclear capability. This strikes me as something that’s very important because you mentioned some of the limitations right now of AI usage at DOD.

Mr. Chaillan: Yes, there are a few fields that are super essential to winning the next battles, right? And the DOD is lagging behind. We’re very excited to see a new administration that is eager to focus on all these fields, including hypersonics, quantum computing, AI, and drones. And really, even the nuclear deterrent of the United States is already in shambles because of delays we’ve been seeing in some of the programs we’re running, which are, again, lagging behind schedule and way over budget. We’re talking 10x over budget.

So it’s concerning. We need to do better. We need to be less complacent. We need to have more urgency to get things done. And more importantly, we need to understand the kind of fights we’re going to be fighting moving forward. I’m not sure it’s going to be much about jets and bombers, although those still need to be at the top of their innovation game. It’s also going to be about software and drones and AI capabilities that empower humans to make better, faster decisions. And quite honestly, right now, the money spent on those domains was mostly wasted on paper research, like DEI research on AI, or ethics.

We spent so much money debating whether or not to use AI in the DOD, which is just mind-boggling to me. China is not wasting time pondering whether or not they should use AI in their weapon systems. If they have it and we don’t, it’s the same as saying maybe we just don’t do nuclear weapons anymore, and China has them, and so be it. I don’t think that’s a good answer, and I don’t think anyone would want to live in that world.

Mr. Jekielek: Nick, I have to ask you about this ethics piece. I mean, it’s one thing to say whether it should be used at all. It’s a different thing not to have some significant ethical guidelines. And you’re right, those guidelines may not exist in communist China at all. But you can kind of imagine, yes, autonomous weapons systems driven by AI that do what they want. There have been very popular movies made about this, right? We definitely don’t want that. Clearly, we have to have some kind of ethical framework.

Mr. Chaillan: We do, but it doesn’t mean you spend 100% of your budget on ethics. If I were to pick, I would probably spend 5% of the budget on ethics and 95% on capabilities. And look, we’re so far away from autonomous weapons powered by AI. We’re still trying to use it for contracts and for basic uses like saving people time and headaches.

I have a great story about someone working as the executive officer for the secretary of the Air Force. He called me, almost in tears, saying: I was going to have to spend hours writing this report for the SECAF, but I remembered we had Ask Sage, and I was able to do it in 15 minutes. Now I’m going to be able to go see my kids and spend time with them, thanks to you. That’s what we’re about. That’s so far away from people using it on weapons.

By the way, I agree that we should be very cautious when it comes to putting AI on weapons. But the fact is, we’re going to have to get there, because China is going to do it. We demonstrated, when we did dogfights between jets, that humans lost every single one against jets powered by AI with no human on board. Humans don’t have the ability to move and make decisions as fast as the technology. We’re going to need to find a way to compete against these new weapons, and you don’t want to be the one playing catch-up.

One thing that I think we all know from World War II and nuclear weapons is that being first was important, and playing catch-up is never good. We can’t dismiss the importance of getting autonomous weapons powered by AI, with humans still in control. But the fact is, at some point, when you get into a fight, it’s a face-to-face fight between two AI-powered weapons. The human, to some degree, needs to allow the fight to begin and then get out of the way to let us win.

That’s the world we live in. We can pretend it’s not happening. Everybody says, we could have China sign some agreement and treaty to say they’re not going to do that. But they will sign it and do it anyway, so we can’t take that chance. Quite honestly, we’re so far away from those weapons, which scares me even more, because China is not. We need to keep that in mind as well.

Mr. Jekielek: Fascinating. Something I’ve been seeing a lot of recently is videos of these incredible light displays, which are actually something like tens of thousands of drones being used in coordination. People have been noticing, and I’ve been thinking about, of course, the military potential of this kind of technology. Have you thought about this? And where are we at with that?

Mr. Chaillan: Back in 2018, I pushed for the creation of a special office to go after the defensive side of this, and also the offensive side of swarming technologies. Honestly, we have not done nearly enough to even comprehend what can be done with those swarming technologies. The speed and the cost of these devices are mind-boggling, and most people don’t even know how quickly these things can move. By the time you even understand what’s going on, the attack is already over, and you have nothing left to do.

So I can tell you it’s probably one of the biggest threats, and not just from the CCP, but also from terrorist organizations. Quite honestly, the cost being so low nowadays and the technology being so accessible, there is really no barrier to entry. And the fact that we’re not spending a significant amount of money in the defense budget and at DHS as well to go above and beyond to understand and have answers to prevent those attacks is very concerning.

Mr. Jekielek: Give me a scenario of what one of these attacks might look like.

Mr. Chaillan: The sky’s the limit with these attacks. They could put explosives on some of these drones, or even just use them to crash into things. There are so many things that can be done with swarming technologies: disrupting air traffic control, disrupting airspace. There’s almost nothing you cannot do with them, from putting bombs on them to just dropping them from the sky onto people and objects. It’s a very concerning capability.

You can see how well coordinated these can be, and they can operate disconnected as well, which means that if they lose the control link, they can still continue to behave and achieve whatever they were programmed to do. So a lot of the technology we use to disrupt their capabilities would have no effect, because they would still fly off and do what they’re trying to do. Many commercial drones are designed to fall out of the sky if they lose the connection to the controller, or to just go back to where they took off.

The military versions of these drones can be programmed to continue executing the last instructions they were given, so a lot of the technology we use to disable drones would just not work efficiently to stop these attacks. Honestly, people are going back to basics, to nets or birds, to attack these drones, and that technology is not realistic against a massive swarming attack. So I think we really need to wake up.

Mr. Jekielek: There’s a huge interest in the U.S. government right now, as we’ve seen in the last several weeks, in cutting costs and getting rid of bloat with DOGE. Last night, I was listening to the first DOGE report in a shared space on X. What you’re describing sounds like it would require a lot more military spending. How do you view that?

Mr. Chaillan: I don’t think it requires more military spending. I think we need to waste less of it on the wrong things. My take is that probably 70% of the defense budget is wasted on the wrong things, so if anything, we probably have too much money. It’s just spent on the wrong things, by the wrong people, people who don’t understand technology or the next battles we’re going to be fighting. They are stuck in time, with a 50-year-old, outdated way of thinking.

We partnered with NATO for years when I was in the building at the Pentagon. I would be shocked by countries like Malaysia, smaller nations that don’t have the luxury of wasting their money, yet had very good capabilities, because they had very little money to spend and they had to spend it wisely. There is such a thing as having too much money. That’s where DOGE comes in, bringing in people from the outside, like me.

When I started back in 2018, I had zero engagement with the government; I knew nothing about it. It took me two and a half years to know enough to be dangerous, to understand it well enough to know what actions to take. They need to surround themselves with people that have this mentality of breaking things smartly, not just breaking things without understanding the impact. They really need to start bringing in people that have a good understanding of what’s going on in the DOD in particular.

I love that they’re spending time on other agencies, but let’s face it, the DOD is a beast, and you need to go after it with the right people. You cannot just bring in people that have never dealt with it. You cannot hope to have any kind of real success within two or three years if you don’t bring in the right talent.

Mr. Jekielek: Nick, this has been a fascinating conversation. Congratulations on the success of your new company. Tell us where we can check out Ask Sage.

Mr. Chaillan: We’re one of the first companies to be built and powered by GenAI. We were only two people, my wife and I, when we created it. Since then, we have grown to 20 people. But getting to the kind of valuation we got with two people plus GenAI demonstrates to your audience what they can do with the technology to create their next business, their next innovation. I built things I didn’t know how to build, but I was able to do it by being empowered by AI. If they want to check us out, we are at asksage.ai. They can create an account for free and try it out.

Mr. Jekielek: Any final thoughts as we finish?

Mr. Chaillan: The most important thing is to make sure that the DOD starts adopting AI technologies. I’m not just talking about my company; they can pick whatever product they want to use. They need to pick the best of breed. We need to stop spending money building government-made junk, which is exactly what we’ve been doing for the last four years, instead of investing in and collaborating with industry.

The government is not good at building products and not good at innovating. They can’t keep up. There’s so much bureaucracy, so much paperwork. It’s impossible for the government to build technology at the speed of relevance. So they need to partner with best-of-breed companies. If we want to have a chance at winning against China, we need to reignite the partnership between the private sector and the public sector.

Honestly, we’re so far behind. Silicon Valley refused to work with the DOD many times. Yet when a few companies are willing to collaborate with the Department of Defense, the DOD keeps using research and development money, illegally, to compete against those commercial companies. That’s not how we’re going to win.

If there’s one thing that the administration needs to look at, it is why the teams at the Air Force Research Laboratory are spending millions building their own technology in a vacuum, instead of partnering with Google, OpenAI, Ask Sage or whatever company, right? Pick your poison, but use the best of breed. Don’t reinvent the wheel. Let’s augment the capabilities to empower airmen and guardians and all service members to be more efficient instead of building stuff in a vacuum.

Mr. Jekielek: Nicolas Chaillan, it’s such a pleasure to have you on the show.

Mr. Chaillan: Thanks for having me.

Jan Jekielek is a senior editor with The Epoch Times, host of the show “American Thought Leaders.” Jan’s career has spanned academia, international human rights work, and now for almost two decades, media. He has interviewed nearly a thousand thought leaders on camera, and specializes in long-form discussions challenging the grand narratives of our time. He’s also an award-winning documentary filmmaker, producing “The Unseen Crisis,” “DeSantis: Florida vs. Lockdowns,” and “Finding Manny.”

©2023-2025 California Insider All Rights Reserved. California Insider is a part of Epoch Media Group.