MR. DE VYNCK: Hello. That was a long quote from you, Alex.
MR. DE VYNCK: My name is Gerrit De Vynck. I am a tech reporter with The Washington Post based in San Francisco, and I'm joined by Alex Wang, who you just saw that great intro about.
It's really cool to see so many people here. Thank you so much for coming. This event was really oversubscribed, and I think, you know, we both go to a lot of events that are usually in San Francisco, and it's cool to come here to the other side of the country and see, you know, just the level of interest. Like in SF, it's kind of everything everyone's been talking about. Maybe we're even a little bit sick of it, but it's cool to say, okay, it's not just us. It's not just the weird, you know, hippie commune out on the West Coast that's talking about this stuff. So thank you all for coming.
Alex, I think we can just kind of jump right into it, but I just want to make sure for people who maybe haven't heard of Scale or maybe have heard of you but don't really know what you do. Can you just in a couple sentences explain, you know, your company? Like how are you different from a company like OpenAI or Google or Microsoft that's also doing AI right now?
MR. WANG: Yeah, that's a great question. So I started the company back in 2016. I was studying artificial intelligence at MIT and became--this was the year when DeepMind came out with AlphaGo, when Google released TensorFlow, and it became very clear that artificial intelligence, even at that time, was going to be one of the most important technologies certainly of my lifetime.
And I started Scale really to build the foundations and power the development of AI with the most critical piece, the data.
Since starting the company in 2016, over the past seven years, we've been a part of every major advancement in artificial intelligence. We've worked with many of the large autonomous vehicle efforts, including General Motors, Toyota, and many of the large automakers. We've worked closely with the U.S. government and the DoD on many of the initial AI programs. I'm sure we'll talk about that here in a bit. And then we've worked very closely with the entire emergence of generative AI and large language models. We've worked with OpenAI since 2019, innovated on many novel methods and uses of data for artificial intelligence, and at this point work with the vast majority of the AI ecosystem, players like Meta, Microsoft, and many others.
MR. DE VYNCK: So it's kind of like picks and shovels of like this AI gold rush. Like you're selling the tools to help these companies develop these chatbots and LLMs and that kind of thing?
MR. WANG: Yeah. I think our differentiated view is perhaps that, you know, this entire industry needs platforms, and it needs infrastructure to enable it to be successful. And so our view is, the best way to enable all of that to happen is to power it with an infrastructure platform that everybody can use and the entire industry can benefit from.
MR. DE VYNCK: And you mentioned the government. Obviously, your quotes, you know, your congressional testimony talking a lot about military use, talking a lot about China, geopolitics. You have a $250 million deal with the Pentagon. I mean, that's a serious amount of money. I mean, why--you know, coming from San Francisco, it's not something that we hear a lot. We don't talk that much about even government tech and especially military tech. A lot of people out there are still uncomfortable with that. Why did you choose to position your company this way, to, you know, really aggressively sell to the Pentagon? What's behind that choice?
MR. WANG: Yeah. So I grew up in Los Alamos, New Mexico. For those of you who watched "Oppenheimer," it was literally filmed in many places from my childhood. And both my parents worked for the National Lab in Los Alamos, working on fusion physics and weapons technology, and so I grew up in this hotbed of, you know, these incredibly brilliant scientists who had made it their career and made the decision to dedicate their lives towards building very advanced technology to ensure that we maintain U.S. leadership.
And as artificial intelligence became a more and more real technology over the past decade, it became pretty clear that this technology was one of the very few technologies that had the ability to impact the balance of power globally, and in particular, the sort of--you know, China published their strategy, "Made in China 2025," of which chips and AI are some of the key tenets. You know, they talk specifically about how they believe a future world will be primarily dominated by chips and semiconductors, and they need to invest heavily into that technology to enable future technologies such as AI.
And so, in particular, I went to China in 2018. I visited China--one of our investors organized this trip--to understand the Chinese tech ecosystem. And at one of the companies, a Chinese facial recognition company, you walk into the lobby, and there's a giant screen that shows a video feed of the lobby, and your face immediately gets recognized as you walk in. In real time, you see your face be recognized--who you are, your major demographics. It was a very dystopian tech demo. And this was back in 2018, and I basically realized that other countries--particularly China--were going to be very, very dedicated in using artificial intelligence to power their country's ambitions, let's say. And this is well reported: they use facial recognition technology very actively to suppress Uyghurs and in building a global surveillance state.
And it became pretty clear that, you know, if you believe that AI is going to be one of these critical technologies, there had to be American companies who could help bridge the gap between Silicon Valley and D.C. and could help bridge the gap between this incredible wellspring of technology and innovation that was happening in San Francisco, happening in Silicon Valley, and bring that technology to the U.S. government to actually empower America to stay ahead, to stay in a leadership position.
MR. DE VYNCK: Right. I mean, we just heard from Leader Schumer and he was--you know, he said we are ahead--or the U.S. is ahead, but the gap is narrowing was his characterization of it. You know, you and I were just talking backstage about chips and kind of, you know, how we're--you know, a lot of time when we talk about AI, we think about software. We think about things that are, you know, learning themselves. But at the end of the day, hardware is a huge part of this. And so, I mean, do you think that characterization of, you know, ahead but the gap is narrowing is accurate, and how do you think about this race or this arms race, so to speak?
MR. WANG: Yeah. You know what? What I would probably say is certainly the U.S. is ahead. The technologies were invented and innovated and developed predominantly in the United States. London--you know, the UK as well--has been a key innovation hub for the technology. And so we're ahead today.
I think China has incredible ambitions to catch up from a technological perspective, and they've demonstrated in the past, in both software and other AI technologies, a clear ability to, you know, catch up and in some cases even surpass U.S. tech capabilities.
If we look at the last generation of artificial intelligence, computer vision technology--so being able to understand images and videos for technologies like facial recognition or self-driving cars--China was behind. You know, these technologies were created and developed in the United States. China recognized that, immediately created very large domestic industries to fuel this AI development in facial recognition, in autonomous vehicles and so on. And now if you look at where the cutting-edge computer vision technology is being built, it's actually in China. You know, they successfully caught up and got ahead.
And so my fear in this current wave is that in large language models, in cutting-edge generative AI technologies, the same might happen yet again.
You know, we saw--it was reported earlier this year that China has bought $5 billion worth of high-end GPUs, predominantly NVIDIA GPUs. That's an incredible investment. It's a very--that's a very large and decisive investment by Chinese tech giants to catch up to American technology.
And in the backdrop of everything that's happening now in AI is the scaling laws, and this is, I think--you know, it's sort of a little bit behind the scenes, but this is the underlying trend that's defining everything, which is, you know, simply put, we're using just dramatically more compute, dramatically bigger models, and dramatically more data to build dramatically more powerful algorithms.
So in the past four years, there's been a thousand-fold increase in the amount of data used to power large-scale AI systems. So, you know, in 2019, the models were about 2 billion parameters in size, and now they're about 2 trillion parameters in size.
Many companies are on the record projecting, over the next roughly three years, another hundred-fold scale-up in computational capacity for these algorithms.
So over the course of, you know, that seven-year span, it's a hundred thousand-fold increase in amount of computational power applied to training these large generative models. And that--you know, there's very few industries where you see a--over a seven-year period, a hundred thousand-fold increase in resources. And so this creates a lot of--this creates a lot of pressure in how countries think about this technology, and in particular, it creates a lot of pressure on the supply chain.
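To make the arithmetic behind those figures explicit (a back-of-the-envelope check assembled from Wang's stated numbers, not a calculation he walks through on stage):

$$
\frac{2 \times 10^{12}\ \text{parameters today}}{2 \times 10^{9}\ \text{parameters in 2019}} = 10^{3},
\qquad
10^{3} \times \underbrace{10^{2}}_{\text{projected scale-up}} = 10^{5},
$$

that is, the thousand-fold increase already realized, compounded with the projected hundred-fold scale-up over the next three years, yields the hundred-thousand-fold figure across the roughly seven-year span.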
So kind of as you mentioned, this depends a lot on hardware. It depends a lot on high-end GPUs, particularly GPUs manufactured and produced by NVIDIA.
And, you know, we saw recently increased export controls on chips. I think this is going to be an increasingly hot-button issue for the U.S. versus other countries, and today, a hundred percent of high-end GPUs are manufactured in Taiwan at TSMC.
So there's a very clear geopolitical tension that only increases--it will increase literally a hundred-fold over the next three years--which is that today the choke point of the entire AI industry and all AI progress comes down to these fabs in Taiwan at TSMC. And so, you know, there's a lot of ways this plays out. There are many scenarios, but in one such scenario, where China deems that they're falling dramatically far behind, it becomes far more likely that they'd choose to invade Taiwan, and either all the fabs in Taiwan blow up--TSMC blows up--setting back AI progress across the board, or they seize them and then use that production solely for their own purposes.
MR. DE VYNCK: I mean, there's a lot of people in Silicon Valley, prominent AI leaders, powerful AI leaders who are, you know, talking about AI algorithms beginning to outthink humans in years rather than decades. And I think, you know, I've been very skeptical about this, but these are very smart people who have serious chops and have huge amounts of followings within the industry. And some of them say, you know, the worst thing you could do is attach something like that to a military system. And so, I mean, how do you engage with that, or how do you think about that belief that AI will, you know, outstrip human ability to control it imminently? Like do you take that seriously at all, or like where does--what do you think of that?
MR. WANG: If you look at the existing technology that we have as well as the technology that's coming down the pipe and sort of like all of the research and understanding of where this technology is going, I don't think that's a reasonable fear as of now.
I do think that this technology is incredibly powerful, both for ensuring that democratic powers stay on top and that the United States maintains a leadership position. And there are real misuse cases, and there are things that we need to be concerned about the technology being used for.
Our view is that AI--you know, if you look at the history of warfare, it's punctuated by technological advancements that create asymmetric advantage.
That's sort of the--you know, that summarizes centuries and centuries of warfare--and artificial intelligence is one of a small handful of technologies. It's not the only technology, but it's one of a small handful of technologies that has the potential to shift that balance of power going forward.
You know, we talked about this almost exactly one year ago with Eric Schmidt, and I think that the--you know, the CCP is very clear about their ambitions. They're very clear that they believe--you know, there are some writings that they have where they talk about AI as a potential leapfrog technology for the PLA versus the U.S. DoD. They believe that if they overinvest in AI while the United States in parallel underinvests in artificial intelligence, because we're going to overinvest in maintaining our existing systems, they could actually develop far superior capabilities to ours in the United States.
So broadly speaking, if you zoom all the way out, I think this is--this is one of the key technologies for military power and hard power over the next--over the coming years, and we need to be--we need to be thinking about it as such.
MR. DE VYNCK: Do you draw any red lines for yourself, though? Because, you know, obviously, you're providing infrastructure for the government. You're providing tools for the government to, you know, crunch their data, to get smarter, to get faster. But if there was a bid--say, in a couple of years, once your own tech has advanced--for some kind of cyber weapon that would go and disrupt an enemy nation's energy infrastructure at a time of war, and you had the capacity to build something like that, would you bid for an offensive weapon like that?
MR. WANG: Yes. So our view--you know, the DoD has actually spent a lot of time thinking about these questions. And I think the ethics of the use of artificial intelligence has been one of the primary pillars of their exploration and their effort. The DoD published their ethical AI principles a number of years ago, long before the technology was even as powerful as it is today, let alone even more powerful, to do a lot of, I think, preemptive thinking about what happens as this technology becomes more and more powerful. And I think they're very thoughtful, and I think, in general, our view is that we should build technologies that adhere ultimately to the DoD's ethical AI principles.
There's yet an additional piece which is the--let's say how do you enforce that we actually adhere to these principles, right? And our view is that there has to be a lot of advancements in testing and evaluation of AI systems.
I think the greatest fear of many military commanders I've spoken to is that there will be some decision that's made, rightly or wrongly, to deploy a very immature AI system that could then create dramatic risks for our soldiers on the battlefield. And so I think we need to be thinking about what it means to actually have mature AI technology versus hype-driven AI technology, and how do we ensure that any technology that we deploy goes through the proper rigorous testing and evaluation--you know, red teaming and deep, deep principles-based assessment--to ensure that we have, you know, actually effective systems?
MR. DE VYNCK: Right, right. And, I mean, like are you--do you think we're there? Like, you know, because there's also some autonomous weapon systems that are already out there, you know, that other countries are using. There's, you know, drones that are able to kind of like detect certain targets and make decisions sort of on their own, based on their own programming without a human necessarily in the loop. And so, in some ways, it feels like this stuff is already getting out of our hands.
MR. WANG: You know, our view, and in our conversations with most of the leaders in the DoD, is that humans are, quite necessarily, always in the loop. The technology as it stands today is primarily useful as a decision aid, not a decision-maker, and, you know, there's a lot of very, very advanced military analysis on this matter. I think, you know, if you were to sum it up overall, it's that one of the key problems impacting our military today is that there's too much information but too little intelligence.
You know, there's an inundation of information coming from all sorts of different sensors and platforms, and the ability to synthesize that into core intelligence that can help military leaders and commanders understand what they should do--that's the missing gap. That's very different from, I think, fully autonomous weaponry or fully autonomous operations. I think it's more about decision aid and helping human decision-makers and human operators be able to operate more effectively.
MR. DE VYNCK: You know, a lot of your business model requires contract workers to kind of like assess technology, label things. This is something that obviously not just you, but the entire AI industry, there's a lot of humans that are behind it. And some colleagues of mine earlier this summer reported--you know, went and spoke to some contractors of yours in the Philippines who, you know, weren't getting all the money that they believed that they were entitled to. And, you know, you don't need to talk specifically about your situation, but if you talk--if you look at the industry as a whole, there's still a lot of human involvement, right? And, you know, where in terms of that contract workforce--like is that something that you think for years and years and years as AI continues to get smarter, we will need, you know, hundreds of thousands of humans to be involved in that painstaking work, or is that something that is only really at the beginning of the tech development, and then down the road, it might not be necessary anymore?
MR. WANG: Our view is that humans will always be very, very critical towards the development of AI technology. And so there will always be humans in the loop. There will always be humans involved in the actual development of the algorithms that are used.
You know, back in 2019, we actually worked very closely with OpenAI to innovate and develop some of the--today, very cutting-edge--techniques to enable humans to provide input and preferences into the models to be able to guide their behavior. You know, we developed this technique called "reinforcement learning with human feedback," RLHF, that has now become a cornerstone of the entire AI industry in ensuring that we build very helpful and harmless AI models. You know, OpenAI has published some of their research on this. They've reported that through the use of reinforcement learning with human feedback, they're able to achieve an improvement in the helpfulness of the models equivalent to a 100-fold increase in model size. Simply put, what that means is, there's almost a quantum leap forward in the ability to build AI systems that actually adhere to human intent, adhere to human principles, because of this technology that we've developed with them.
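For readers unfamiliar with the technique Wang describes, here is a minimal sketch of the reward-modeling step at the core of RLHF: human labelers rank pairs of model outputs, and a small model is trained to score the human-preferred response higher. All names, dimensions, and data below are hypothetical toy values for illustration, not Scale's or OpenAI's actual implementation.

```python
# Minimal sketch of RLHF reward-model training (hypothetical toy example).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a response embedding; higher means more human-preferred."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Toy preference data: embeddings of (human-chosen, human-rejected) response
# pairs, standing in for the rankings human labelers provide.
chosen = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for step in range(100):
    # Bradley-Terry pairwise loss: push the chosen response's score above
    # the rejected one's. This is the standard reward-modeling objective.
    loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trained reward model then supplies the reward signal for a separate policy-optimization step (commonly PPO) that fine-tunes the language model itself toward human preferences.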
MR. DE VYNCK: Just got a minute left, and I want to ask you the same question that we asked Senator Schumer, which is, you know, you've testified a lot. You've talked a lot about, you know, concerns, risks. You've mentioned even in this conversation about, you know, guardrails and testing and evaluation. I mean, what--but if you just zoom out and think about AI in general and how quickly this technology is moving, what keeps you up at night?
MR. WANG: I think global proliferation of the technology is the most concerning trend today. If you look at what's happened just in the past year since ChatGPT, you've seen it go from a primarily domestic technology to being an incredibly international technology. Some of the most advanced open-source models were developed in Paris, France. There have been very large open-source models developed in the UAE and in the Middle East, and then China, as I mentioned, has bought $5 billion worth of high-end chips to, you know, put their own hat into the ring of AI development.
And the technology is at risk of real misuse. You know, the risks that keep me up at night the most are misuse in cyberattacks and misuse in bioweaponry, and these are some of the uses of the technology that I think could really negatively impact humanity and could have very, very negative consequences for us all.
MR. DE VYNCK: All right. Well, thank you very much for joining us, Alex.
MR. WANG: Thanks, Gerrit.
MR. DE VYNCK: Don't go anywhere. We'll have another guest very soon.