Something Big Is Happening

Think back to February 2020.
If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone had told you they were stockpiling toilet paper, you would have thought they'd been spending too much time in a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier.
I think we're in the "this seems overblown" phase of something much, much bigger than Covid.
I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't... my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.
I should be clear about something up front: even though I work in AI, I have almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies... OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn't lay. We're watching this unfold the same as you... we just happen to be close enough to feel the ground shake first.
But it's time now. Not in an "eventually we should talk about this" way. In a "this is happening right now and I need you to understand it" way.
I know this is real because it happened to me first
Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next.
For years, AI had been improving steadily. Big jumps here and there, but they were spaced out enough that you could absorb each one as it came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.
Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest.
I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.
Let me give you an example so you can understand what this actually looks like in practice. I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect.
I'm not exaggerating. That is what my Monday looked like this week.
But it was the model released last week (GPT-5.3 Codex) that shook me the most. It wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. That inexplicable sense of knowing what the right call is, the thing people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.
I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely.
And here's why this matters to you, even if you don't work in tech.
The AI labs made a deliberate choice. They focused on making AI great at writing code first... because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that would unlock everything else. That's why they did it first. My job started changing before yours not because the labs were targeting software engineers... it was just a side effect of where they chose to aim first.
They've now done it. And they're moving on to everything else.
The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.
"But I tried AI and it wasn't that good"
I hear this constantly. I understand it, because it used to be true.
If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.
That was two years ago. In AI time, that is ancient history.
The models available today bear almost no resemblance to what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall", a debate that has been running for over a year, is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.
Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what's coming.
I think of my friend, who's a lawyer. I keep telling him to try using AI at his firm, and he keeps finding reasons it won't work. It's not built for his specialty, it made an error when he tested it, it doesn't understand the nuance of what he does. And I get it. But I've had partners at major law firms reach out to me for advice, because they've tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told me it's like having a team of associates available instantly. He's not using it because it's a toy. He's using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it'll be able to do most of what he does before long... and he's a managing partner with decades of experience. He's not panicking. But he's paying very close attention.
The people who are ahead in their industries (the ones actually experimenting seriously) are not dismissing this. They're blown away by what it can already do. And they're positioning themselves accordingly.
How fast this is actually moving
Let me make the pace of improvement concrete, because I think this is the part that's hardest to believe if you're not watching it closely.
In 2022, AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.
By 2023, it could pass the bar exam.
By 2024, it could write working software and explain graduate-level science.
By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.
On February 5th, 2026, new models arrived that made everything before them feel like a different era.
If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.
There's an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours. The most recent measurement (Claude Opus 4.5, from November) showed the AI completing tasks that take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.
But even that measurement hasn't been updated to include the models that just came out this week. In my experience using them, the jump is extremely significant. I expect the next update to METR's graph to show another major leap.
If you extend the trend (and it's held for years with no sign of flattening), we're looking at AI that can work independently for days within the next year. Weeks within two. Month-long projects within three.
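If you want to check that arithmetic yourself, here's a minimal sketch of the extrapolation, assuming the roughly five-hour baseline and seven-month doubling time cited above continue to hold. The eight-hour-workday conversion is my own simplification, not METR's.

```python
# Extrapolating the METR trend: the length of tasks (measured in
# human-expert hours) a model can complete autonomously, doubling
# on a fixed cadence.
BASELINE_HOURS = 5    # Claude Opus 4.5 measurement cited above (Nov 2025)
DOUBLING_MONTHS = 7   # observed doubling time (may be as fast as 4)
WORKDAY_HOURS = 8     # assumption: one "day" of human work

for months_out in (12, 24, 36):
    hours = BASELINE_HOURS * 2 ** (months_out / DOUBLING_MONTHS)
    print(f"{months_out} months: ~{hours:.0f} expert-hours "
          f"(~{hours / WORKDAY_HOURS:.0f} workdays)")
```

Run it and you get roughly 16 hours (about two workdays) a year out, about 54 hours (a week-plus of work) at two years, and about 177 hours (a month of workdays) at three. That's where the days/weeks/months framing comes from, and at the faster four-month doubling, every one of those numbers arrives sooner.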
Dario Amodei, the CEO of Anthropic, has said that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027.
Let that land for a second. If AI is smarter than most PhDs, do you really think it can't do most office jobs?
Think about what that means for your work.
AI is now building the next AI
There's one more thing happening, and I think it's both the most important development and the least understood.
On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:
"GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."
Read that again. The AI helped build itself.
This isn't a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.
Amodei says AI is now writing "much of the code" at his company, and that the feedback loop between current AI and next-generation AI is "gathering steam month by month." He says we may be "only 1–2 years away from a point where the current generation of AI autonomously builds the next."
Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.
What this means for your job
I'm going to be direct with you because I think you deserve honesty more than comfort.
Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now.
This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.
Let me give you a few specific examples to make this tangible... but I want to be clear that these are just examples. This list is not exhaustive. If your job isn't mentioned here, that does not mean it's safe. Almost all knowledge work is being affected.
Legal work. AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates. The managing partner I mentioned isn't using AI because it's fun. He's using it because it's outperforming his associates on many tasks.
Financial analysis. Building financial models, analyzing data, writing investment memos, generating reports. AI handles these competently and is improving fast.
Writing and content. Marketing copy, reports, journalism, technical writing. The quality has reached a point where many professionals can't distinguish AI output from human work.
Software engineering. This is the field I know best. A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. Large parts of the job are already automated: not just simple tasks, but complex, multi-day projects. There will be far fewer programming roles in a few years than there are today.
Medical analysis. Reading scans, analyzing lab results, suggesting diagnoses, reviewing literature. AI is approaching or exceeding human performance in several areas.
Customer service. Genuinely capable AI agents... not the frustrating chatbots of five years ago... are being deployed now, handling complex multi-step problems.
A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can't replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I'm not sure I believe it anymore.
The most recent AI models make decisions that feel like judgment. They show something that looks like taste: an intuitive sense of what the right call is, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.
Will AI replicate deep human empathy? Replace the trust built over years of a relationship? I don't know. Maybe not. But I've already watched people begin relying on AI for emotional support, for advice, for companionship. That trend is only going to grow.
I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn't "someday." It's already started.
Eventually, robots will handle physical work too. They're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expects.
What you should actually do
I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.
Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It's $20 a month, and two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what's actually worth using.
Second, and more important: don't just ask it quick questions. That's the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work. If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you're in finance, give it a messy spreadsheet and ask it to build the model. If you're a manager, paste in your team's quarterly data and ask it to find the story. The people who are getting ahead aren't using AI casually. They're actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.
And don't assume it can't do something just because it seems too hard. Try it. If you're a lawyer, don't just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal. If you're an accountant, don't just ask it to explain a tax rule. Give it a client's full return and see what it finds. The first attempt might not be perfect. That's fine. Iterate. Rephrase what you asked. Give it more context. Try again. You might be shocked at what works. And here's the thing to remember: if it even kind of works today, you can be almost certain that in six months it'll do it near perfectly. The trajectory only goes one direction.
This might be the most important year of your career. Work accordingly. I don't say that to stress you out. I say it because right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says "I used AI to do this analysis in an hour instead of three days" is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what's possible. If you're early enough, this is how you move up: by being the person who understands what's coming and can show others how to navigate it. That window won't stay open long. Once everyone figures it out, the advantage disappears.
Have no ego about it. The managing partner at that law firm isn't too proud to spend hours a day with AI. He's doing it specifically because he's senior enough to understand what's at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It's not. No field is.
Get your financial house in order. I'm not a financial advisor, and I'm not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.
Think about where you stand, and lean into what's hardest to replace. Some things will take longer for AI to displace. Relationships and trust built over years. Work that requires physical presence. Roles with licensed accountability: roles where someone still has to sign off, take legal responsibility, stand in a courtroom. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn't happening.
Rethink what you're telling your kids. The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed. I'm not saying education doesn't matter. But the thing that will matter most for the next generation is learning how to work with these tools, and pursuing things they're genuinely passionate about. Nobody knows exactly what the job market looks like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.
Your dreams just got a lot closer. I've spent most of this section talking about threats, so let me talk about the other side, because it's just as real. If you've ever wanted to build something but didn't have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. I'm not exaggerating. I do this regularly. If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month... one that's infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you've been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you're passionate about. You never know where they'll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.
Build the habit of adapting. This is maybe the most important one. The specific tools don't matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won't be the ones who mastered one tool. They'll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.
Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new... something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.
The bigger picture
I've focused on jobs because it's what most directly affects people's lives. But I want to be honest about the full scope of what's happening, because it goes well beyond work.
Amodei has a thought experiment I can't stop thinking about. Imagine it's 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?
Amodei says the answer is obvious: "the single most serious national security threat we've faced in a century, possibly ever."
He thinks we're building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it's creating.
The upside, if we get it right, is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself... the researchers building this genuinely believe those are solvable within our lifetimes.
The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can't predict or control. This isn't hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.
The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it's too powerful to stop and too important to abandon. Whether that's wisdom or rationalization, I don't know.
What I know
I know this isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.
I know the next two to five years are going to be disorienting in ways most people aren't prepared for. This is already happening in my world. It's coming to yours.
I know the people who will come out of this best are the ones who start engaging now — not with fear, but with curiosity and a sense of urgency.
And I know that you deserve to hear this from someone who cares about you, not from a headline six months from now when it's too late to get ahead of it.
We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet.
It's about to.
If this resonated with you, share it with someone in your life who should be thinking about this. Most people won't hear it until it's too late. You can be the reason someone you care about gets a head start.
Thank you to Kyle Corbitt, Jason Kuperberg, and Sam Beskind for reviewing early drafts and providing invaluable feedback.